You're working a busy shift in TCC one Sunday afternoon when you get a page that EMS is bringing in a patient in cardiac arrest. The patient is a 57-year-old male with unknown past medical history who collapsed while at church. On EMS arrival at the scene, the patient was in ventricular fibrillation. He was defibrillated twice, devolved into PEA, then went back into ventricular fibrillation two more times en route. He arrives in ventricular fibrillation after a final failed attempt at defibrillation. A bolus of amiodarone has also been given, along with four rounds of epinephrine.
You resume CPR as you switch the patient over to your monitor. He is being bagged via a laryngeal airway device with a good waveform on capnography. You defibrillate the patient, which results in PEA. He has now been in cardiac arrest for twenty minutes, and you begin to wonder what other management options you have. You consider whether you should give sodium bicarbonate or calcium chloride given his prolonged cardiac arrest, but your attending tells you that neither treatment is beneficial (though you aren't sure epinephrine is beneficial either, and you keep giving that).
After a total of thirty minutes of downtime, the patient is now in asystole and the decision is made to call the code. As you leave the room, you wonder whether you should have given sodium bicarbonate after all. You figure the patient had probably become quite acidotic, which you know decreases catecholamine responsiveness, and think trying to ameliorate that acidosis would help the patient. Not satisfied with your attending's brush-off, you decide to search the literature yourself and see what evidence is out there...
PICO Question:
Population: Adult patients suffering out-of-hospital cardiac arrest. Special interest was paid to those with prolonged cardiac arrest.
Intervention: Sodium bicarbonate administration
Comparison: Standard care
Outcome: Survival to hospital discharge with good neurologic function
Search Strategy:
The Clinical Queries tool in PubMed was searched using the terms “bicarbonate AND arrest,” resulting in 555 citations (https://tinyurl.com/yxj2682c). Of these, four articles were chosen. A search of the Cochrane Database did not identify any systematic reviews on this topic.
Bottom Line:
The debate surrounding the use of sodium bicarbonate in cardiac arrest is longstanding. Early Advanced Cardiac Life Support (ACLS) guidelines recommended routine bicarbonate administration for cardiac arrest, while more recent revisions have recommended against its routine use. Stoking this ongoing debate is a lack of rigorous evidence to direct practice.
The randomized controlled trials on this topic either looked at outcomes of little importance to patients, such as return of spontaneous circulation (ROSC) or survival to the emergency department (ED) (Vukmir 2006), or enrolled such a small sample of patients that clinical significance could not be determined (Ahn 2018). Vukmir et al found no difference in survival to ED admission (RR 0.99; 95% CI 0.70 to 1.40) among all patients, though they did demonstrate a trend toward improved survival in those patients with prolonged (> 15 minutes) cardiac arrest (RR 2.0; 95% CI 0.92 to 4.5). Unfortunately, they did not look at long-term survival or neurologic outcomes. Ahn et al found no difference in ROSC or survival to hospital admission (RR 0.25; 95% CI 0.03 to 2.2), but the confidence intervals are so wide that they do not exclude a potentially clinically significant difference.
Larger observational studies have demonstrated conflicting results. Kawano et al (Kawano 2017) found a decrease in rates of survival to hospital discharge with bicarbonate administration (AOR 0.48, 95% CI 0.35-0.65) and a decrease in survival with a favorable neurologic outcome (AOR 0.61, 95% CI 0.43-0.86), after adjustment for multiple confounders. Kim et al (Kim 2016) demonstrated an increase in ROSC with bicarbonate administration (OR 2.49; 95% CI 1.33 to 4.65) independent of other factors. While Kawano et al looked at more clinically appropriate outcomes, both studies were severely limited by a high risk of selection bias and imbalance with regard to both known and unknown confounders between the groups.
While there is not a substantial body of evidence to either support or refute the utility of bicarbonate administration in cardiac arrest, the bulk of the evidence does not suggest any benefit. Instead, providers should focus on those factors that have been shown to improve outcomes, such as high quality chest compressions with minimal interruptions and early defibrillation (when appropriate). Bicarbonate, meanwhile, should be reserved for specific cases, such as hyperkalemia or suspected TCA overdose.
Evaluation of the Thoracic and Lumbar Spine in Blunt Trauma
Apr 04, 2019
Journal Club Podcast #48: February 2019
A look at the utility of history, physical exam, and plain radiography for detecting clinically significant injuries of the thoracic and lumbar spine…
You're working a shift at a level II trauma center in the community one rainy afternoon when EMS brings in Mr. Q, a 62-year-old man with hypertension and hyperlipidemia who was involved in a motor vehicle collision. He was the unrestrained driver in a car that hydroplaned on the highway and collided with the concrete barrier. His car spun around and was struck on the driver's side before coming to rest in the median. The driver had LOC and ended up in the passenger side of the car. His only complaint is a headache and chest pain. He arrives awake, alert, with abrasions and contusions to his upper face. He has no midline cervical, thoracic, or lumbar spine tenderness and he is neurologically intact.
Your next patient is Ms. P, a 47-year-old female with no significant past history who slipped on the wet stairs outside her apartment building. She ended up falling down four stairs, landing primarily on her buttocks. She complains of coccygeal and left hip pain, but denies back or neck pain. She has no midline tenderness in her cervical or thoracic spine, but does have tenderness in the midline lumbar spine, around L2/3. She is neurologically intact.
You consider your imaging options in both of these patients. For the first patient, you are planning to get a CT of his head, face, and cervical spine, and are considering a CT of the chest, abdomen, and pelvis as well, based purely on the mechanism of injury. You wonder if you even need to consider imaging of the thoracic and lumbar spine, given his lack of physical exam findings, and if so, wonder if you should get CT reconstructions or if plain films would suffice. In the second case, in addition to imaging of the pelvis and left hip, you are planning to get plain films of the lumbar spine. Again, you wonder if this is sufficient, or if you need to get more advanced imaging (such as a CT) to evaluate for fracture.
The first patient gets a "pan-scan" and is found to have isolated facial bone fractures, for which he is evaluated by ENT and eventually discharged home. The second patient is found to have no fractures on plain films, feels much better after ibuprofen, and also goes home. You still have questions about your imaging choices, and a quick look online directs you to a recent systematic review on the evaluation of thoracic and lumbar spine following blunt trauma. Wondering what other literature there is, you begin to conduct a more thorough search...
PICO Question:
Population: Adult patients suffering blunt trauma
Intervention: Aspects of history (e.g. mechanism of injury) and physical exam, plain radiography
Comparison: CT scan, surgical findings, follow-up
Outcome: Need for surgical intervention or TLSO bracing
Search Strategy:
A systematic review and meta-analysis, recently published by a collaboration of physicians that included one Washington University emergency physician and recent graduate of our residency program, was first identified. The bibliography of this review was searched to identify three additional relevant studies.
Bottom Line:
Evaluation for injury of the cervical spine following blunt trauma was made much easier by the derivation and subsequent validation of key clinical decision rules (NEXUS criteria, Canadian C-spine rule). Unfortunately, no such rule exists for evaluation of the thoracic and lumbar spine. At least one observational study from LA County/USC Medical Center (Inaba 2011) suggested that physical exam alone performed poorly at evaluating for a “clinically significant” injury of the thoracic or lumbar spine, with a sensitivity of 78.6%, specificity of 83.4%, LR+ of 4.73, and LR- of 0.26. A recent systematic review on this topic (VandenBerg 2018) similarly found that aspects of the history and physical exam (when looked at independently) were inadequate at ruling in or out disease. Mechanism of injury had a pooled LR+ ranging from 0.5 to 1.7 and LR- of 0.63 to 1.25. There was no negative finding on physical examination that significantly reduced the probability of finding a TL-spine fracture, although the presence of a palpable spine deformity was good at ruling in a fracture, with a LR+ of 15.3.
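Those likelihood ratios follow directly from the quoted sensitivity and specificity (LR+ = sensitivity / (1 - specificity); LR- = (1 - sensitivity) / specificity). Here is a minimal sketch in Python reproducing the Inaba 2011 figures; the function is ours, purely for illustration:

```python
def likelihood_ratios(sensitivity, specificity):
    """Derive LR+ and LR- from a test's sensitivity and specificity."""
    lr_pos = sensitivity / (1 - specificity)  # odds multiplier for a positive exam
    lr_neg = (1 - sensitivity) / specificity  # odds multiplier for a negative exam
    return lr_pos, lr_neg

# Physical exam for clinically significant TL-spine injury (Inaba 2011)
lr_pos, lr_neg = likelihood_ratios(0.786, 0.834)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")  # LR+ = 4.73, LR- = 0.26
```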
Various imaging modalities have also been evaluated, with some studies suggesting that plain films alone are inadequate to detect injury. One study looking at trauma to the thoracic spine (Karul 2013) found that plain radiography had a LR+ of 1.09 and LR- of 0.93, suggesting that such films are useless whether they are positive or negative for fracture. Unfortunately, this study was severely limited by incorporation bias (CT was the ultimate gold standard) and spectrum bias, as the study only included patients with a thoracic spine deformity or step-off on exam who were still having pain within 10 days. This study did not address the utility of plain films in patients at lower risk of injury.
The systematic review by VandenBerg et al identified additional studies looking at the diagnostic accuracy of various imaging modalities. Five studies evaluated the accuracy of plain films of the T- and L-spine, with a pooled LR+ of 25.0 (95% CI 4.1-152.2) and LR- of 0.43 (95% CI 0.32-0.59) for diagnosis of injury. Similar likelihood ratios were found when pooling studies looking only at the thoracic spine or only at the lumbar spine. Studies evaluating CT of the chest, abdomen, and pelvis and those looking at reformatted thoracic and lumbar CT found a high degree of accuracy with either modality. As noted by the authors of the review, this evidence is based primarily on retrospective studies at high risk of incorporation bias. Additionally, many of these studies used as their outcome a “significant injury,” which included either the need for surgery or the need for TLSO bracing. Recent research suggests that TLSO bracing (primarily used for burst fractures) is not beneficial (Bailey 2014), and this outcome may not be as patient-centered as intended.
One clinical decision rule has been derived (Inaba 2015), with resulting sensitivity and specificity of 98.9% and 29.0%, respectively. This corresponds to a negative likelihood ratio of 0.04, suggesting the rule could significantly reduce the risk of a clinically significant fracture when negative. Unfortunately, this rule has not been validated (Level IV CDR), and the potential impact of the rule has not been evaluated. The final clinical decision rule consisted of the following criteria:
1. High-risk mechanism
2. Findings of pain, tenderness to palpation, deformity, or neurologic deficit
3. Age ≥ 60
Future research will be needed to validate this rule in multiple settings, and should be aimed at determining the impact of the rule to ensure it improves outcomes or reduces unnecessary imaging without worsening outcomes. Until then, it seems reasonable to continue using clinical acumen to determine who needs imaging of the thoracic or lumbar spine following blunt trauma, with a lower threshold to at least get plain films in those felt to be at low risk of injury. In patients felt to be higher risk, it seems prudent to forego plain radiography and proceed to CT scanning.
Cricoid Pressure During RSI in the ED
Feb 26, 2019
Journal Club Podcast #47: January 2019
The Selleck Maneuver?
A brief look at whether cricoid pressure (AKA the Sellick maneuver) decreases aspiration risk or impedes airway visibility…
You are working a shift in TCC one day when the level 1 pager goes off. EMS is bringing in a forty-year-old male involved in a rollover MVC. He was unrestrained, has a blood pressure of 70/40, and has been intermittently combative. You gather your team, grab the ultrasound, and make sure the CMAC is in the room and ready for intubation.
The patient arrives in extremis. His combativeness has resolved as he is now unresponsive. He is breathing spontaneously, does not respond to questions, and has sonorous respirations. His GCS is 5. As the team continues the primary survey, all are in agreement that the patient requires intubation to secure his airway.
A quick assessment from the head of the bed, as the nurses draw up medications, reveals a large laceration to the left parietal scalp with minimal bleeding, left periorbital ecchymosis and edema, and 4 mm pupils that are round and reactive to light. He has no signs of neck or intraoral injury. His O2 sat is 98% on a nonrebreather and his blood pressure is now 82/38.
The meds are ready and the team is prepared for the intubation. Once he is paralyzed and sedated, you gently remove the cervical collar and ask your colleague to provide inline stabilization. As you open the airway and begin to insert the laryngoscope, you hear somebody question why no cricoid pressure is being administered. As you are busy trying to keep the patient alive, you ignore the ensuing argument and insert an endotracheal tube between the vocal cords without incident.
Later, the team discusses the utility of cricoid pressure (aka the Sellick maneuver) while the patient is undergoing CT scans of his entire body. You have been taught that cricoid pressure is not really helpful, but wonder if there is actual evidence to back this up. As you begin to search the internet, you find a very recent article published on the topic, with some very in-depth reviews. Not content with this single article, you dive deeper into the subject...
PICO Question:
Population: Adult patients requiring endotracheal intubation under RSI in the emergency department for any reason.
Intervention: Cricoid pressure (AKA the Sellick Maneuver).
Comparison: No cricoid pressure.
Outcome: Aspiration pneumonitis, hospital-acquired pneumonia, ventilator associated pneumonia, ease of intubation, hypoxia during intubation attempt, failed intubation.
Search Strategy:
A PubMed search using the terms "cricoid pressure" OR "Sellick maneuver" resulted in 472 citations (https://tinyurl.com/ycb6v3dl), from which the 4 most relevant articles were selected. The Cochrane Database of Systematic Reviews was also searched; this resulted in a single systematic review. As this review only included one article, the review was omitted.
Bottom Line:
For years after Dr. Sellick first described his maneuver in 1961 (Sellick 1961), cricoid pressure was touted for use during endotracheal intubation to theoretically prevent the aspiration of stomach contents by compressing the upper esophagus. Despite a lack of clinical evidence, this maneuver was widely used by both anesthesiologists (Howells 1983, Thwaites 1999) and emergency physicians (Kovacs 2004, Gwinnut 2015). Over the years, several studies have evaluated the efficacy and harm associated with cricoid pressure, with varying results.
In 2005, Turgeon et al performed a randomized controlled trial evaluating the effect of cricoid pressure on failure to intubate within 30 seconds among patients undergoing elective procedures in the operating room (Turgeon 2005). They found no significant difference in failure rates (RR 1.2, 95% CI 0.58 to 2.5) with and without cricoid pressure.
A cross-sectional study published in 2013 (Oh 2013) evaluated the effect of cricoid pressure on glottic view by recording images of the glottis at varying degrees of cricoid pressure using a video laryngoscope. While they did find a decrease in the medial area of the glottic view with increasing pressure, they did not evaluate the effect of this on the ease or success of intubation.
More recently, two randomized controlled trials were undertaken to assess the effect of cricoid pressure on rates of aspiration. In the first of these (Bohman 2018), patients undergoing elective surgical procedures were randomized to receive or not receive cricoid pressure. Following intubation, all patients had 5 mL of sterile saline infused into the endotracheal tube, followed by aspiration via a sterile suction catheter. Pepsin A testing was then performed on the aspirate to determine whether micro-aspiration had occurred or not. The presence of significant pepsin A was similar between the groups (RR 0.77, 95% CI 0.33 to 1.8), suggesting no difference in micro-aspiration rates. There was also no difference in the incidence of healthcare-associated pneumonia or ventilator-associated pneumonia.
The second trial published last year (Birenbaum 2018) also randomized patients undergoing surgery to cricoid pressure or sham cricoid pressure. Pulmonary aspiration was defined as either visual detection of aspiration during laryngoscopy or by tracheal aspiration following tracheal intubation. Again, the incidence of aspiration was not different between the two groups (RR 0.90, 95% CI 0.33 to 2.38). Unfortunately, the study was designed as a noninferiority trial, but the incidence of aspiration was much lower than anticipated (0.5% overall). This resulted in the upper limit of the 1-sided 95% confidence interval exceeding the a priori noninferiority margin of 1.5, and the study could not conclude that omitting cricoid pressure was noninferior to its use.
Unfortunately, none of the studies we reviewed was conducted in the emergency department (all were conducted in the operating room), and questions of external validity make it difficult to generalize these results to our patient population. Additionally, while these four studies only represent a small portion of the literature on this topic, there has been no direct evidence that cricoid pressure reduces the risk of adverse patient-centered outcomes (i.e. aspiration pneumonitis, healthcare-associated pneumonia, ventilator-associated pneumonia, ARDS).
Utility of the Vaginal Exam in First Trimester Pain or Bleeding
Dec 12, 2018
Journal Club Podcast #46: October 2018
A look at the evidence for and against the routine performance of pelvic examination for abdominal pain or vaginal bleeding in early pregnancy…
You are working a busy afternoon shift in EM-2, and have just completed your tenth pelvic exam of the day, when you go in to see yet another patient with a pelvic complaint. You encounter a pleasant, 25-year-old woman who is nine weeks pregnant with a very desired pregnancy. She reports light vaginal bleeding without passage of tissue for the last six hours. She denies any lightheadedness or dizziness and reports only mild, intermittent, lower abdominal cramping. She has only gone through two pads since the bleeding began.
On exam she has stable vital signs and no abdominal tenderness to palpation. Her bedside ultrasound reveals a live IUP with a heart rate of 150. Her quantitative hCG is 8,000 and her blood type reveals that she is A positive. You present the patient to your attending and show her the ultrasound images. When she asks you what the pelvic exam revealed, you admit that you haven't done it yet and dutifully trudge back to the patient's room like a child who's been sent to the principal's office.
The pelvic exam reveals a closed cervical os with minimal blood in the vaginal vault and the patient ends up being discharged with bleeding precautions. As you bid her farewell, you wonder if you really needed to do that pelvic exam at all. You're pretty sure the patient didn't enjoy it and you certainly could have done without it, and you wonder if there's any evidence to support or refute the utility of the pelvic exam in the evaluation of vaginal bleeding in early pregnancy. You vow to do some digging to support your hypothesis that it is an unnecessary, and uncomfortable, waste of time…
PICO Question:
Population: Pregnant women < 20 weeks gestational age with vaginal bleeding or abdominal pain
Intervention: Omission of pelvic (speculum and/or bimanual) examination in the ED
Comparison: Standard of care, including full pelvic examination
Outcome: Change in management or disposition, missed ectopic pregnancy, need for intervention (e.g. manual vacuum aspiration, dilatation and curettage)
Search Strategy:
PubMed was searched using the terms “((pelvic OR vaginal) AND examination) AND early pregnancy” limited to clinical trials (https://tinyurl.com/yda24s5t). This resulted in 74 citations, from which four articles were chosen.
Bottom Line:
Vaginal bleeding and abdominal pain are frequent complaints seen in the ED during early pregnancy. Typical evaluation consists of a pelvic ultrasound to confirm the presence of an intrauterine pregnancy (IUP), often accompanied by a pelvic examination (speculum and bimanual) to evaluate the extent of bleeding and to confirm a closed cervical os. Given the time consumed performing the pelvic examination and the perceived discomfort experienced by the patient, some have called into question the utility of this portion of the work-up.
Unfortunately, there is little research into this question, and what evidence exists is mostly of low quality. Three prospective observational studies were identified, though two of these (Johnstone 2013, Hoey 2004) were severely limited by the lack of a pelvic ultrasound during the ED stay to confirm an IUP. Given that our primary diagnostic modality in these patients is ultrasound to confirm an IUP, the results of these studies are of little value (external validity). The third observational study (Seymour 2010) only enrolled pregnant patients of 16 weeks gestational age or less with a confirmed IUP on ultrasound. They found that the pelvic examination did not affect patient disposition, but did not look at the effect on management outside of this (e.g. need for manual vacuum aspiration, dilatation and curettage) or the timing of follow-up.
The fourth article reviewed (Linden 2017) was a prospective randomized controlled trial conducted at two academic EDs in Boston and Washington, D.C. Pregnant patients < 16 weeks gestational age with vaginal bleeding or abdominal pain and with a documented IUP were randomized either to have the pelvic examination omitted or to have one performed. The primary outcome (a 30-day composite that included need for further treatment or intervention, unscheduled return visits to the ED or clinic, need for hospital admission, emergency procedure, transfusion, infection, or subsequent identification of another source of symptoms) occurred with similar frequency in the no pelvic exam group (19.6%) and the pelvic exam group (22.0%), for an absolute risk difference of -2.4% (95% CI -11.8% to 7.1%). Unfortunately, this study was limited by its small size as well as its chosen outcomes. While it assessed many sources of morbidity, it did not address the potential need for an urgent procedure among those patients with limited follow-up, the potential for missed infectious diagnoses, or the long-term effects of delayed treatment and/or diagnosis. While this subject remains controversial, there is insufficient evidence to recommend omitting the pelvic examination in this population of patients.
BNP in the Evaluation of Syncope
Oct 08, 2018
Journal Club Podcast #45: August 2018
A look at the use of BNP to differentiate cardiac from non-cardiac causes of syncope...
You are working an evening shift in your ED as the senior resident when you encounter Mr. Drop, a 58-year-old male presenting with syncope. He was sitting at the dinner table, getting ready to eat, when he suddenly lost consciousness and fell face-first into his mashed potatoes. He had no seizure activity and awoke within a few seconds. His wife helped him clean the mashed potatoes off his face and brought him to the ED. His past medical history includes hypertension and diet-controlled diabetes, and he endorses a family history of coronary artery disease in both parents. He denies chest pain, shortness of breath, or palpitations, and his vital signs and physical exam are currently normal.
You go to put in orders, including an ECG, BMP, CBC, and troponin, when your large Russian attending, whose hairstyle reminds you of Eddie Munster, suggests you order a BNP, “to help determine if this is cardiac syncope.” In your years of training, this is the first time anyone has suggested such a thing, but not wanting to anger this dangerous-looking attending, you comply without question. Later, when you're certain he's not looking, you open up PubMed and begin searching the evidence to see if there is actually any correlation between BNP levels and cardiogenic syncope…
PICO Question:
Population: Adult patients presenting to the ED with syncope of unclear etiology
Intervention: BNP testing
Comparison: Standard ED workup
Outcome: Diagnostic accuracy of BNP for potentially life-threatening causes of syncope (e.g. arrhythmia or structural cardiac disease), impact of BNP testing on admission rates, impact of BNP on diagnostic testing.
Search Strategy:
PubMed was searched using the terms “(BNP OR “brain natriuretic peptide”) AND syncope” (https://tinyurl.com/yc4ta5e9). This resulted in 35 citations, from which the 4 most relevant articles were chosen.
Bottom Line:
Syncope is a common presenting complaint in the ED with a large number of potential underlying causes, ranging from the very benign (e.g. vasovagal syncope, orthostasis) to potentially life-threatening causes (e.g. cardiac arrhythmia). While studies have demonstrated a broad range for rates of serious outcomes, as wide as 1.2% to 36.2%, there is very little objective data to help the emergency physician determine who is safe for discharge and who requires admission to prevent morbidity and mortality. Several clinical prediction rules have been developed to assist with disposition decisions; unfortunately, none of these has performed well when external validation has been attempted (Birnbaum 2008, Serrano 2010, Safari 2016). As a result, investigators continue to search for potential prognostic factors to help differentiate those syncope patients at high risk of adverse outcome from those safe for discharge.
Brain natriuretic peptide (BNP) and its analogs are typically elevated in the setting of structural heart disease, but have also been shown to increase following certain cardiac arrhythmias. As a result, BNP has been proposed as a possible test to help differentiate cardiogenic from non-cardiogenic causes of syncope. A small, prospective, observational study conducted in Scotland in 2007 (Reed 2007) demonstrated a significant association between BNP values and serious adverse outcome. Unfortunately, the test characteristics for this association were rather poor. To predict a serious outcome at 3 months (including death, MI, life-threatening arrhythmia, implantation of a pacemaker or defibrillator, PE, CVA, intracranial hemorrhage, need for blood transfusion or need for acute surgical procedure or endoscopic intervention), a BNP > 100 had a positive likelihood ratio (LR+) of 2.21 and a negative likelihood ratio (LR-) of 0.48, suggesting that neither a positive nor negative test at this threshold would have any significant impact on post-test probability. For a threshold of 1000, BNP had a LR- of 0.51, which again would be of little help. While the LR+ was infinity, there were only 3 patients with a BNP this high. A larger study would need to confirm the potential utility of a BNP this elevated, though it would likely impact only a small percentage of patients presenting with syncope (most of whom would likely be admitted anyway).
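To see why such values barely move the needle, recall Bayes' theorem in odds form: post-test odds = pre-test odds × LR. A minimal sketch, assuming a purely hypothetical 10% pre-test probability of a serious outcome (a number chosen for illustration, not taken from Reed 2007):

```python
def post_test_probability(pre_test_prob, lr):
    """Apply a likelihood ratio to a pre-test probability via odds."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

pre = 0.10  # hypothetical pre-test probability of a serious outcome
print(f"Positive BNP (LR+ 2.21): {post_test_probability(pre, 2.21):.1%}")  # ~19.7%
print(f"Negative BNP (LR- 0.48): {post_test_probability(pre, 0.48):.1%}")  # ~5.1%
```

Neither result moves a 10% baseline out of the clinical gray zone, which is precisely the problem.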
Based on this observed association, these authors conducted an additional study (Reed 2010) in which they attempted to derive and validate a clinical decision rule employing BNP results as one of the criteria. They developed the ROSE rule, which employed the following criteria:
BNP level ≥ 300 pg/mL
Bradycardia ≤ 50 in ED or pre-hospital
Rectal examination with fecal occult blood
Anemia (hemoglobin ≤ 90 g/L)
Chest pain associated with syncope
ECG showing Q-wave (not in lead III)
Saturation ≤ 94% on room air
This rule was derived in 529 ED patients presenting with syncope. In the derivation cohort, the rule had a LR+ of 3.5 and LR- of 0.1, suggesting that it may be useful in reducing the probability of a serious outcome in patients with none of the criteria, with a small increase in the probability in patients for whom the rule is positive. The rule performed slightly less well in the validation cohort (N = 550), with a LR+ of 2.5 and LR- of 0.2. Unfortunately, these results are skewed by several sources of bias: only 13% of patients underwent rectal examination (which was a component of the rule); a non-consecutive sample of patients was enrolled, with 35% and 40% of eligible patients not approached for enrollment in the derivation and validation cohorts, respectively; and the rule was derived and validated at a single study site, requiring additional validation at multiple sites before it can be used safely. Previously, the San Francisco syncope rule was validated at a single site, but then failed to validate when tested at additional sites.
Two additional prospective studies evaluating the diagnostic accuracy of BNP or its analogs were reviewed. The first of these (Pfister 2012) enrolled patients with syncope admitted to a cardiac unit in Cologne, Germany. They evaluated the accuracy of NT-pro-BNP for identifying patients with an arrhythmia or structural cardiac/cardiopulmonary abnormality. With 161 patients enrolled, the study found a LR+ of 1.86 and LR- of 0.20. The second study (Isbitan 2016) enrolled patients at two emergency departments in New Jersey and evaluated the accuracy of BNP at determining “serious outcomes” (which were similar to those listed for the two studies by Reed). Using a cutoff of 250, the authors found a LR+ of 5.02 and LR- of 0.57. Again, these two studies demonstrate rather poor positive and negative likelihood ratios which would not be expected to have a significant effect on probability. Additionally, the first study likely suffers from spectrum bias, as it was conducted on a cohort of patients already admitted to a cardiac unit rather than undifferentiated ED patients.
Overall, the current evidence for the use of BNP in the work-up of syncope is rather limited and not very promising. All of the studies were limited by the lack of a clear gold standard in the evaluation of syncope and by a non-uniform application of additional testing leading to differential verification bias and partial verification bias. The diagnostic accuracy of BNP in these studies was poor, with a LR- ranging from 0.2 to 0.57 and LR+ ranging from 1.86 to 5.02; although LR+ was infinity when a high enough BNP threshold was used, such a cutoff would likely be of little value, as it would be unlikely to change management and would apply to a rather small number of patients. The ROSE rule, while promising on derivation, had a LR- of 0.2 when validated; additional validation in other clinical settings would be needed to determine whether it would be reasonable to use. For now, BNP and the ROSE rule should not be used to determine disposition or direct further evaluation of patients presenting to the ED with syncope.
Balanced Fluids: The Wrong "Solution"
Aug 20, 2018
Journal Club Podcast #44: July 2018
A (somewhat) restrained rant on the importance of looking at the details and not combining outcomes to improve precision...
It's another busy day in TCC, when an elderly female rolls in from triage with fever, cough, and a new oxygen requirement. Her vitals are T 38.3, BP 90/42, HR 115, RR 24, SpO2 88% on RA. Even before you see the patient you are concerned for pneumonia with severe sepsis. You institute early antibiotics, fluids, and serial lactates, and systematically begin to aggressively resuscitate her. The patient requires nearly five liters of normal saline before her blood pressure stabilizes. Proud of your resuscitation, you tweet out #crushingsepsis and #normalsaline4life, which gets an immediate response from Dr. Evan Schwarz, who happened to be trolling your twitter feed. He tweets “More like #increasedrenalfailure and #trybalancedfluids”. Inspired by his tweets (and his article published in EPMonthly) you perform a brief literature review on the topic of ‘balanced fluid’ resuscitation.
PICO Question:
Population: Adult patients receiving IV crystalloid (admitted patients, critically ill patients, patients with severe sepsis or septic shock)
Intervention: Balanced (chloride-restricted) crystalloids such as Lactated Ringer’s or Plasma-Lyte
Comparison: Normal Saline
Outcome: Mortality, renal failure, need for renal replacement therapy
Search Strategy:
Two recently published, highly publicized articles (Self 2018 and Semler 2018) were chosen for inclusion. In order to identify two additional articles, the previous journal club covering this topic (November 2015) was searched and the two most relevant articles chosen.
Bottom Line:
Normal saline has long been the “go to” fluid of choice for resuscitation in the ED for critically ill patients. However, the use of such “chloride rich” or “unbalanced” fluids has been controversial for decades, with many calling for the use of fluids that more closely resemble the tonicity of human blood. Aggressive resuscitation with isotonic saline has been shown to decrease serum pH, without affecting serum osmolality (Williams 1999), and has been suggested to increase the risk of renal dysfunction (Lobo 2014). The clinical significance of these and similar effects has been called into question over the last decade. We sought to evaluate the evidence for and against the use of balanced fluid resuscitation in ED patients, particularly those with severe sepsis or septic shock.
The first paper we reviewed was a retrospective before-and-after study conducted at a single ICU in Melbourne, Australia (Yunos 2012). This study demonstrated a decreased risk of acute kidney injury (OR 0.52, 95% CI 0.37-0.75) and need for renal replacement therapy (OR 0.52, 95% CI 0.35-0.76) with the use of balanced fluids. Unfortunately, this study was not only limited by its methodological design, but is not externally valid to our patient population, as only 22% of patients were admitted from the ED, half were post-operative, and nearly a third were admitted following elective surgery.
A less methodologically robust retrospective study was identified that at least enrolled patients more similar to those in our setting (Raghunathan 2014). This study was conducted using a retrospective cohort of patients from 360 US ICUs with sepsis requiring vasopressor therapy. Unfortunately, as this was retrospective, the two treatment groups were unbalanced, and statistical methods had to be employed to balance the two cohorts. Patients receiving any amount of balanced fluid were propensity matched to patients receiving only unbalanced fluids during the same time period. Patients who received some balanced fluids saw a decrease in in-hospital mortality (RR 0.86, 95% CI 0.78-0.94; NNT 31) with no difference in AKI or need for dialysis. A dose-response relationship was also observed, in which the relative risk of in-hospital mortality was lowered an additional 3.4% on average for every 10% increase in the proportion of balanced fluids received.
More recently, two large quasi-randomized studies looking at the use of balanced fluids were published out of Vanderbilt University Medical Center. The first of these (Self 2018) enrolled patients receiving at least 500 mL of intravenous isotonic crystalloid in the ED who were later admitted to a non-ICU bed (i.e. non-critically ill patients). Patients were "randomized" based on calendar month, alternating between saline and balanced crystalloids. There was no difference in the primary outcome (number of hospital-free days to day 28) between the two groups. There was a small decrease in risk of the secondary outcome, major adverse renal events—a composite of doubling of creatinine from “baseline,” need for renal replacement therapy, and death—with an adjusted odds ratio of 0.82 (95% CI 0.70-0.95), a risk reduction of 0.9%, and an NNT of 111. This slight difference was entirely driven by the decreased risk of a doubling of the creatinine, with no actual difference in need for renal replacement therapy or death. In fact, the statistical significance achieved was also entirely due to the use of a composite outcome to increase the incidence of any outcome (thereby narrowing the 95% CI), with no statistically significant difference in the risk of doubling of creatinine when looked at in isolation (RR 0.86, 95% CI 0.73-1.01). It shouldn't be surprising that no real difference in outcomes was observed in this study, given that these were relatively healthy patients receiving a rather small amount of fluid (median volume of ~1 liter during the entire hospitalization). The results are made even more suspect by the fact that over a third of patients did not have a baseline creatinine in the system for comparison, but rather had a baseline creatinine estimated based solely on age, race, and gender.
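The effect of combining outcomes is easy to demonstrate numerically: for a fixed relative effect and sample size, a rarer outcome produces a confidence interval that is wide relative to the effect. A minimal sketch with made-up event rates and an arm size loosely in the range of these trials (illustrative only, not the actual trial data):

```python
import math

def risk_difference_ci(p_control, p_treatment, n_per_arm):
    """Approximate 95% CI for the risk difference between two equal-sized arms."""
    diff = p_control - p_treatment  # positive = absolute risk reduction
    se = math.sqrt(p_control * (1 - p_control) / n_per_arm
                   + p_treatment * (1 - p_treatment) / n_per_arm)
    return diff - 1.96 * se, diff + 1.96 * se

# Same hypothetical 10% relative reduction at two baseline event rates
for baseline in (0.01, 0.10):
    lo, hi = risk_difference_ci(baseline, 0.9 * baseline, 7500)
    verdict = "crosses zero" if lo < 0 < hi else "excludes zero"
    print(f"baseline {baseline:.0%}: 95% CI {lo:+.4f} to {hi:+.4f} ({verdict})")
```

At a 1% baseline the interval crosses zero; at 10% the identical relative effect reaches significance, which is exactly what pooling several outcomes into a composite accomplishes.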
The second study out of Vanderbilt (Semler 2018) was similar in methodology, but enrolled only adult patients admitted to one of five participating ICUs. In this case, the primary outcome was the composite incidence of major adverse renal events, as defined for the previous study. The authors again found a small reduction in the incidence of major adverse renal events, with an adjusted OR of 0.90 (95% CI 0.82-0.99), a risk reduction of 1.1%, and an NNT of 91. As in the prior study, there was no statistically significant difference for any of the individual components of this composite outcome; by combining outcomes, the authors were able to increase the incidence and hence narrow the 95% CI, allowing them to achieve statistical significance. In this case, the difference was only observed after statistical adjustment for known confounders; when looking at unadjusted data, there was no statistically significant difference between the groups (RR 0.93, 95% CI 0.86-1.00). Interestingly, the median volume of fluid administered was about 1 liter, similar to the study conducted on non-critically ill patients, and it is quite likely that a more pronounced effect would be seen in patients receiving a larger volume of fluid. In fact, a fairly large treatment effect was observed in subgroup analysis of patients with sepsis, with a risk difference of 5.1% (NNT ~20).
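As an aside, the NNT figures quoted in these two studies are simply the reciprocal of the absolute risk reduction; a quick check of the three numbers above:

```python
def nnt(absolute_risk_reduction):
    """Number needed to treat is the reciprocal of the absolute risk reduction."""
    return 1 / absolute_risk_reduction

for arr in (0.009, 0.011, 0.051):  # the risk reductions quoted above
    print(f"ARR {arr:.1%} -> NNT {round(nnt(arr))}")
# ARR 0.9% -> NNT 111, ARR 1.1% -> NNT 91, ARR 5.1% -> NNT 20
```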
The bulk of this evidence suggests that when administered broadly, the use of saline versus balanced fluids does not have any real impact on meaningful outcomes. However, when larger volumes of fluid are administered (such as in patients with sepsis), there does seem to be a trend, at least, towards improved outcomes. Rather than continue to research the use of balanced fluids in non-critically ill patients or in all patients admitted to an ICU, regardless of medical condition, further research should attempt to confirm the apparent benefit in those patients likely to receive a larger volume of IV fluids. In the meantime, despite the low cost and lack of harm associated with Lactated Ringer's solution, it is difficult to broadly recommend its use over normal saline; it is reasonable, however, to consider it when two or more liters of fluid are expected to be given.
Steroids in Sepsis and Septic Shock
Jul 03, 2018
Journal Club Podcast #43: May 2018
A brief look at the evidence for and against the use of steroids in patients with sepsis and septic shock...
You’re working the weekend shift in TCC when you get a page: triage patient to 3L for low BP. You meet the patient in the room and find a critically ill-appearing 55-year-old female with one week of cough and increased shortness of breath. Her vital signs are:
HR 125
BP 65/30
SpO2 89% on room air
RR 28
She is struggling to breathe, getting out 2 to 3 word sentences, and is oriented only to self. You immediately ask the nurses to get two large-bore IVs and hang two liters of normal saline on pressure bags while you prepare to intubate. Following intubation (during which you administer two boluses of phenylephrine, 100 mcg each, for dropping blood pressure), you get a stat portable chest x-ray showing multifocal pneumonia. After your initial two liters of fluid have been administered, followed by a third (and broad-spectrum antibiotics), the patient’s blood pressure is still only 80/45.
You place a right-sided internal jugular central line under ultrasound guidance and start a norepinephrine drip. Your deftly placed arterial line begins to demonstrate an improved BP and MAP and you find the patient a bed in the medical ICU. As the patient is being transferred, you begin to wonder whether steroids would be beneficial in this patient with clear septic shock. After all, you know that much of the problem in sepsis is the inflammatory response, which would theoretically be mitigated by steroid administration. You begin to search the literature and realize this has been a controversial topic dating back over 10 years...
PICO Question:
Population: Adult patients with either severe sepsis without septic shock or with septic shock.
Intervention: IV steroids, administered as an intermittent bolus or a continuous infusion.
Comparison: Placebo and standard of care.
Outcome: Mortality, resolution of septic shock, development of septic shock, ICU and hospital length of stay, duration of mechanical ventilation, need for renal replacement therapy.
Search Strategy:
PubMed was searched using the terms “steroids AND sepsis” and limiting the results to clinical trials (https://tinyurl.com/y9wswub5). This strategy yielded 363 studies, from which the four most relevant articles were chosen. The Cochrane Database of Systematic Reviews was also searched in an attempt to find a meta-analysis of results, but no such article was identified.
Bottom Line:
In 2002, a landmark study published in JAMA (Annane 2002) demonstrated a significant reduction in 28-day mortality with the use of low-dose steroids among patients with septic shock who did not respond appropriately to a corticotropin stimulation test (adjusted odds ratio [OR] 0.54, 95% CI 0.31-0.97). Despite adjusting outcomes for baseline characteristics in this randomized controlled trial (with no difference in outcomes looking at raw data), the authors concluded that steroids were beneficial in non-responders. Since this study was published, several studies have been conducted to reevaluate the effects of steroids on the course of septic shock. We looked at four such articles, including two high-impact articles published earlier this year.
In 2008, the CORTICUS trial was published as a follow-up to the initial 2002 study. This international, multicenter trial again evaluated the efficacy of steroids in both corticotropin non-responders and responders with septic shock, and found no difference in 28-day mortality among either group (relative risk [RR] 1.09, 95% CI 0.77 to 1.52 and RR 1.09, 95% CI 0.84-1.41, respectively). The authors therefore concluded that “hydrocortisone cannot be recommended as general adjuvant therapy for septic shock (vasopressor responsive), nor can corticotropin testing be recommended to determine which patients should receive hydrocortisone therapy.”
Given the different outcomes observed in these two early studies, further research has since been completed to attempt to obtain a more definitive answer. Earlier this year, two studies were published in the New England Journal of Medicine, again with differing results. The ADRENAL study enrolled 3800 patients with septic shock requiring mechanical ventilation, but did not test corticotropin responsiveness. Patients were randomized to either placebo or a continuous infusion of hydrocortisone for up to 7 days, and there was no difference in 90-day mortality between the groups (OR 0.95, 95% CI 0.82-1.10). Similar to the CORTICUS trial, they did find a faster time to resolution of shock in the hydrocortisone group (median 3 vs. 4 days; hazard ratio [HR] 1.32, 95% CI 1.23-1.41), but this again is of unclear importance to the patient.
The APPROCCHSS trial, published soon after the ADRENAL trial, again enrolled patients with septic shock, but was designed to test the efficacy of both combined hydrocortisone-fludrocortisone therapy AND drotrecogin alfa. As drotrecogin alfa was removed from the market several years into the study, an adjustment was made and the study ended with a two-parallel-group design. Unfortunately, due to this adjustment, the study was halted twice for prolonged periods, and hence was completed over a seven-year period, during which multiple changes in the management of sepsis occurred (see Journal Club July 2015, Journal Club October 2010). While this trial found a decrease in 90-day all-cause mortality with steroid administration (RR 0.88, 95% CI 0.78-0.99) its logistical issues make these results difficult to interpret.
While these studies had differing results, the APPROCCHSS study at least suggests that it is reasonable to administer stress-dose steroids to patients with sepsis and refractory shock, as in this study, only patients requiring vasopressors for at least 6 hours were eligible for enrollment. This is likely not a significant change from current management in many ICUs.
An additional study was identified that looked at the ability of steroids to prevent progression to shock in patients with severe sepsis (the HYPRESS trial). A total of 353 patients with sepsis and evidence of organ dysfunction without shock were enrolled and randomized to receive placebo (mannitol) or a continuous infusion of hydrocortisone for 5 days, followed by a taper. There was no significant difference in rates of progression to shock between the two groups (absolute risk reduction [ARR] -1.8%, 95% CI -10.7% to 7.2%), or in 28-day, 90-day, 180-day, ICU, or in-hospital mortality. There was also no difference in ICU or hospital length of stay, need for mechanical ventilation, or need for renal replacement therapy. This final study suggests no benefit with the administration of steroids in patients with sepsis but without shock.
Critical Care Roundup
May 17, 2018
Journal Club Podcast #42: April 2018
A brief recap of four articles selected for review by the critical care folks...
Four articles relevant to critical care medicine in the emergency department were selected by the critical care medicine section. No formal literature search was performed.
Bottom Line:
PGY-1
These two blinded, randomized controlled trials comparing dexmedetomidine with midazolam and propofol demonstrated noninferiority of dexmedetomidine with regard to the proportion of time spent at the desired level of sedation, with a decrease in duration of mechanical ventilation compared to midazolam, but no difference compared to propofol. Imbalances in dosing, resulting in lower levels of sedation among patients receiving dexmedetomidine compared to the standard drugs, and a lack of objective criteria for weaning of mechanical ventilation and extubation suggest that there may be issues with both internal and external validity. Additionally, patient-centered outcomes and cost were not assessed in this study, nor was the incidence or degree of delirium.
PGY-2
This single-center, before-and-after study demonstrated a rather large reduction in mortality among patients with severe sepsis and septic shock treated with IV vitamin C, hydrocortisone, and thiamine. The results of this study are quite profound, and hence should be confirmed with additional prospective, randomized controlled trials. If this intervention is truly this beneficial, and truly reduces mortality to less than 10% in this patient population, routine use of this therapy should be initiated immediately.
PGY-3
In this meta-analysis evaluating the diagnostic capability of serum procalcitonin in the differentiation of sepsis from non-infectious SIRS, the reported pooled sensitivity and specificity correspond to positive and negative likelihood ratios of 3.7 and 0.29, which will only result in small changes in disease probability. Therefore, caution will need to be exercised when interpreting test results. Interval likelihood ratios may provide more clinically useful information, but were not provided. If procalcitonin is to become a relevant aspect of sepsis care, additional research will need to identify a particular clinical role with an improvement in patient-oriented outcomes.
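For context, an interval likelihood ratio is computed per band of test results rather than at a single cutoff: for each interval, LR = (proportion of diseased patients whose result falls in that interval) / (proportion of non-diseased patients in that interval). A minimal sketch with entirely invented counts, purely to illustrate the calculation (the meta-analysis did not report stratified data):

```python
# Hypothetical procalcitonin result bands with invented patient counts
# (illustrative only; not data from the meta-analysis).
bands = {
    "< 0.5 ng/mL": (10, 60),   # (sepsis, non-infectious SIRS)
    "0.5-2 ng/mL": (30, 30),
    "> 2 ng/mL":   (60, 10),
}
total_sepsis = sum(sepsis for sepsis, _ in bands.values())
total_sirs = sum(sirs for _, sirs in bands.values())

for band, (sepsis, sirs) in bands.items():
    interval_lr = (sepsis / total_sepsis) / (sirs / total_sirs)
    print(f"{band}: interval LR = {interval_lr:.1f}")
# Mid-range results (LR ~1.0) change nothing; only the extremes are informative.
```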
PGY-4
This study suggests that an ED protocol to screen elderly patients with functional decline who would benefit from palliative care or hospice is feasible, but highly cost-ineffective. Additional means of implementing this protocol in a way that does not involve hiring additional, full-time staff should be sought and studied.
Prevalence of PE in Syncope Patients
Apr 11, 2018
Journal Club Podcast #41: March 2018
A look at the PESIT trial, which suggests a very high rate of PE in syncope patients, followed by a look at the rest of the evidence...
You're working a TCC shift with Dr. Cohn sitting right beside you. He's drinking a Diet Coke, having not offered you one. You decide to go see your next patient, a 78-year-old female complaining of “feeling woozy”. She endorses syncope, shortness of breath, and leg pain. She is saturating 89% on room air, tachycardic to 104, with a BP of 117/76. She has many reasons other than a pulmonary embolism to be feeling this way, but the syncope has you thinking. You remember reading an article that was all the rage a few months ago regarding syncope as a presenting complaint for PE. It was fake news, you said. So vague. But here you are. You've got a minute, and Dr. Cohn by your side. You search the literature and gently fall into the rabbit hole…
PICO Question:
Population: Adult patients presenting to the ED with syncope and those patients who were admitted to the hospital for syncope.
Intervention: Routine testing for pulmonary embolism (PE)
Comparison: Standard of care with PE workup based on clinical gestalt
Outcome: Prevalence of PE among these patients, mortality, development of CTEPH, adverse reaction to IV contrast, cancer rates.
Search Strategy:
PubMed was searched using the terms "pulmonary embolism" AND prevalence AND syncope (https://tinyurl.com/y7x4gs2u). This resulted in 93 citations, from which 4 relevant articles were chosen.
Bottom Line:
In 2016, the PESIT study from Italy, published in the New England Journal of Medicine, demonstrated an extremely high prevalence of pulmonary embolism (PE) among patients admitted to the hospital for syncope (Prandoni et al). The 17.3% prevalence observed in this study was shocking, and several editorials attempted to rationalize these findings (EPMonthly, R.E.B.E.L.EM, NUEMBlog). As a result, several studies have been published in the interim attempting to either replicate or refute these findings. We looked not only at the PESIT study, but two of these additional studies and a meta-analysis of data in an attempt to place these results in a broader context.
Two retrospective studies were identified, one involving patients from five separate longitudinal administrative databases from Canada, Denmark, Italy, and the United States (Costantino 2018), the other involving patients prospectively enrolled in a syncope database at the University of Utah Hospital (Frizell 2018). Both studies failed to demonstrate such a high prevalence of PE. In the former study, the rate of PE diagnosis among all ED patients ranged from 0.06% to 0.55% in the different databases, while the rate among hospitalized patients ranged from 0.15% to 2.10%. In the latter study, the prevalence of PE among all ED patients with syncope was 0.6%, while the rate among admitted patients was 2.3% (including 2 patients presumably diagnosed with PE within 30 days after hospital discharge, both of whom had a negative CT scan for PE while in the ED).
The meta-analysis we reviewed (Oqab 2017) included 9 studies involving 6608 ED patients and 3 studies involving 975 admitted patients, and demonstrated a similarly low prevalence of PE. The prevalence among all ED patients was 0.8%, and the prevalence among patients hospitalized for syncope was 1.0%. Interestingly, this study specifically excluded the PESIT trial from its meta-analysis. Inclusion of the PESIT study would likely increase the pooled prevalence, though likely not by a significant amount.
The PESIT study itself is the only prospective study of its kind in which all patients being admitted to the hospital for syncope were evaluated for PE. They used an algorithm in which a simplified Wells score for PE was calculated and a D-dimer was drawn in all patients. Those with a low-risk Wells score and a negative D-dimer underwent no further testing, while anyone with a high-risk Wells score or a positive D-dimer underwent either CT pulmonary angiography (CTPA) or ventilation-perfusion testing (V/Q). Interestingly, the enrolled population seems to be a very high-risk cohort. Nearly 11% of patients had cancer, 7% had recent prolonged immobility, and 5% had recent trauma or surgery. Of those patients diagnosed with PE, 45% were tachypneic, a third were tachycardic, a third were hypotensive, and 40% had clinical signs of DVT.
The authors note that 24 patients diagnosed with PE had no clinical manifestations of the diagnosis, including tachypnea, tachycardia, hypotension, or clinical signs or symptoms of DVT. Excluding patients with signs concerning for thromboembolic disease, the rate of PE among remaining patients was only around 5%. If you further excluded patients at high risk of PE (i.e. those with cancer, recent immobilization, or recent surgery) this number would likely be even lower. Previous calculations have estimated the test threshold for PE as low as 1.8% (Kline 2004), or as high as 5.5%. Based on this study, even excluding patients with obvious signs of PE or DVT, the prevalence of disease is still likely above the test threshold.
However, all of the other evidence suggests that the actual prevalence of PE among patients in the ED, or even those being admitted to the hospital, is much lower, and is very likely to be below this threshold. For now, PE should certainly be considered in the differential for any patient presenting with syncope, but routine testing is not likely to benefit patients. Rather, patients with clinical signs or symptoms concerning for PE, or significant risk factors, should undergo risk stratification via the PERC rule, modified Wells score, or Geneva score, with additional testing based on pre-test probability.
Amiodarone for...well...everything
Mar 15, 2018
Journal Club Podcast #40: January 2018
A little discussion about the evidence for amiodarone in atrial fibrillation, stable V-tach, and shock-refractory VF/VT in cardiac arrest...
Working in TCC can be draining, and on one particularly busy afternoon, you begin to suspect your own sanity. After back-to-back cardiac arrest patients, you wonder if perhaps you should have done something less stressful with your life, like maybe become a lobster boat captain or an ice road trucker.
Your first code is a middle-aged female who suffered cardiac arrest while watching Alabama beat Clemson during the Sugar Bowl (yeah...that's right!!!). She was initially noted to be in ventricular fibrillation (VF), and remained so after three rounds of defibrillation. As she arrives in the trauma room, you immediately continue CPR and set up to shock her again. Your attending orders amiodarone, and even though you remember hearing that this may not be very effective, you realize it's not the time to argue. The patient ends up with ROSC and goes to the cath lab when her ECG reveals an anterior STEMI.
Your second patient is an elderly male who was initially found to be in PEA by EMS, but then later developed fine VF after three rounds of epinephrine en route. You shock him three times in the trauma room without effect, and once again your attending calls for amiodarone. After twenty more minutes of CPR, the patient reverts to asystole and the code is soon called.
You end up giving amiodarone twice more in your shift, once to a patient with new-onset a-fib who ends up getting admitted after not converting to a sinus rhythm, and later to a patient with stable, wide-complex tachycardia (which you're pretty sure was ventricular tachycardia [VT]), and you start to wonder if your attending owns stock in the company that makes it. You search online after your shift and find an excellent rundown on the limits of amiodarone on RebelEM, which prompts you to perform your own literature search.
PICO Question:
Population: Adult patients with either new-onset atrial fibrillation, hemodynamically stable ventricular tachycardia (VT), or cardiac arrest due to ventricular fibrillation (VF) or VT
Intervention: IV Amiodarone
Comparison: Placebo or any alternative antiarrhythmic
Outcome: Conversion to normal sinus rhythm, need for hospital admission, survival, functional neurologic outcome, hypotension.
Amiodarone, which was first approved by the FDA in 1985, became a mainstay of arrhythmia management after being added to the ACLS guidelines in 2000. At that time, amiodarone was recommended ahead of lidocaine for management of hemodynamically stable wide-complex tachycardia but was only included as a consideration for refractory VF and pulseless VT. In the 2010 update, amiodarone was recommended as a “first line anti arrhythmic agent” in refractory VF/pulseless VT, based on limited evidence for improved rates of ROSC and hospital admission. In addition, amiodarone has been recommended for use in recent-onset AF for over twenty years (Hou 1995). Given the rise in prominence of procainamide use in AF (see the Ottawa Aggressive Protocol by Stiell et al), and an increased focus on longer-term outcomes in cardiac arrest, we decided to review the evidence for a variety of amiodarone indications frequently seen in the ED.
Stable Ventricular Tachycardia
PROCAMIO, a small, multicenter randomized controlled trial conducted at several hospitals in Spain, enrolled 74 patients with hemodynamically stable, wide-complex tachycardia and randomized them to receive either IV amiodarone or IV procainamide over twenty minutes. Major cardiac events (clinical signs of hypoperfusion, dyspnea, hypotension, or acceleration of heart rate) occurred less frequently among patients receiving procainamide (OR 0.1; 95% CI 0.03 to 0.6). These patients also had a much higher rate of cardioversion (OR 3.3; 95% CI 1.2 to 9.3). Unfortunately, this was a very small study in which only a fifth of the planned number of patients was actually enrolled. Despite this limitation, it seems reasonable to use procainamide rather than amiodarone as a first-line agent for hemodynamically stable wide-complex tachycardia.
Recent Onset Atrial Fibrillation
Procainamide has been used successfully in the management of recent-onset AF, with a previously documented conversion rate of around 60%, occurring at a median of 3 hours following drug infusion (Stiell 2010). In one systematic review and meta-analysis comparing IV amiodarone to placebo and class Ic antiarrhythmics (Chevalier 2003), amiodarone did not have a significantly higher rate of cardioversion compared to placebo at 1 to 2 hours, but did have a higher rate at 6 to 8 and 24 hours. Compared to class Ic antiarrhythmics, amiodarone was less effective at 1 to 2 and 6 to 8 hours, but had similar efficacy at 24 hours. Cardioversion with amiodarone by 6 to 8 hours occurred in 48-62% of patients (depending on the individual study), which is fairly comparable to previously reported rates for procainamide. Given the lack of studies comparing procainamide to amiodarone head-to-head, it seems reasonable to consider either drug, though concerns regarding hypotension with amiodarone may sway many to use procainamide instead.
Refractory VF or Pulseless VT
A recent multicenter, randomized controlled trial conducted at 55 EMS services in North America sought to compare the effectiveness of amiodarone, lidocaine, and placebo in patients with out of hospital cardiac arrest (OHCA) due to refractory VF/pulseless VT. After excluding patients whose initial rhythm was not VF or VT, 3026 patients were enrolled and evenly split between groups. The authors found no significant difference in survival to hospital discharge between patients receiving amiodarone and placebo (ARR 3.2%; 95% CI -0.4% to 7.0%) and no difference between those receiving lidocaine and placebo (ARR 2.6%; 95% CI -1.0% to 6.3%). Unfortunately, despite the large number of patients enrolled, the outcome was fairly rare, which resulted in relatively wide 95% CIs. As a result, a potentially clinically meaningful survival improvement (3.2% for amiodarone and 2.6% for lidocaine) could not be shown to be statistically significant.
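To put that in perspective, the point estimate can be converted to a number needed to treat (NNT). A back-of-the-envelope calculation, assuming (and it is only an assumption, given the confidence interval) that the 3.2% absolute difference for amiodarone reflects a true effect:

```latex
\mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.032} \approx 31
```

An NNT of roughly 31 for survival to hospital discharge would be clinically important if real, which is precisely why a trial too small to confirm or exclude it is so unsatisfying.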
In a follow-up study, the same authors looked at those patients initially enrolled but excluded because VF/pulseless VT was not their initial rhythm (i.e. those patients with an initially non-shockable rhythm). Again, they did not observe any statistically significant difference in survival to discharge between the three groups (1.9% for the placebo group, 3.1% for the lidocaine group, and 4.1% for the amiodarone group). As before, however, the study was not sufficiently powered to rule out a potentially clinically significant mortality improvement of 2% with amiodarone. Given this limitation of the last two studies, and the lack of any demonstrated downside in this subset of patients, it seems reasonable to continue amiodarone use for patients with refractory VF/pulseless VT in both OHCA and in-hospital arrest.
NG Lavage for GI Bleeds
Dec 08, 2017
Journal Club Podcast #39: October 2017
A discussion of the controversial practice of shoving a tube down someone's nose and sucking out stomach contents...
It is a typical Saturday afternoon in TCC when a patient with a history of COPD and GERD who appears short of breath is roomed. When they bring him back from triage in a wheelchair, you watch him shakily stand and transfer to the stretcher, appearing dizzy as he does so. As you go in to greet the patient, you ask the nurse to get him started on the monitor. You can hear wheezing on your exam.
HR 116 RR 24 BP 88/54 Sat 100% Temp 36.3
During the review of systems, your patient embarrassedly admits to having had several large bowel movements over the past two days that appeared dark and have progressively become more tarry throughout the day. He has had smaller episodes in the past, but nothing so large or frequent. He denies any vomiting, and specifically has had no hematemesis. DRE reveals black, heme-positive stool.
His respiratory rate improves a little with a nebulizer, but he still appears short of breath. His labs are significant for a hemoglobin of 6.8, so you initiate your resuscitation. The patient’s vital signs improve slightly but do not normalize. When you hear back from your GI consult, he asks you to admit the patient for serial CBCs overnight after performing a nasogastric lavage.
However, you remember reading an editorial about NG lavage in the evaluation of GI bleeds in patients without hematemesis that was not entirely flattering. On top of that, you are concerned about worsening your patient's existing respiratory distress (or risking aspiration) if the procedure is unnecessary. You decide to perform a quick search to see if there are any articles that might help you weigh the pros and cons...
PICO Question:
Population: Adult patients with potential upper GI bleeding
Intervention: NG tube lavage
Comparison: No NG tube placement
Outcome: Mortality, transfusion requirement, need for surgery, gastric visualization at endoscopy, triage of patients
Search Strategy:
No formal search strategy was used. Two emergency medicine residents used multiple sources to identify articles that evaluated the use of NG lavage in potential upper GI bleeding with regards to any outcome.
Bottom Line
Considered to be one of the more painful procedures performed in the ED, nasogastric tube (NG) insertion is also associated with complication rates of 0.3% to 0.8% (Pillai 2005). NG tubes are often placed in patients with a known or suspected upper GI bleed, with the potential goals of determining whether the source is upper or lower, improving endoscopic visualization of the gastric fundus by lavage, and triaging patients to urgent vs. non-urgent endoscopy (particularly during off hours). Unfortunately, very little evidence exists to support routine NG tube placement or lavage in these patients. We therefore sought to broadly evaluate the potential benefits of NG tube placement in patients with suspected or known upper GI bleeds.
One of the earlier studies sought to evaluate the ability of NG lavage (NGL) to predict the presence of a high-risk lesion (spurting, oozing of blood, or a visible non-bleeding vessel) at endoscopy (Aljebreen 2005). A total of 520 patients with known upper GI bleeding were enrolled from the Canadian Registry of patients with Upper Gastrointestinal Bleeding undergoing Endoscopy (RUGBE). When considering a bloody NGL as a positive test (and coffee-ground, clear, or “other” aspirate as negative), the positive likelihood ratio (LR) was 2.00 and the negative LR was 0.68. When bloody or coffee-ground NGL was considered positive, the positive LR fell to 1.20 and the negative LR to 0.63. Overall, these likelihood ratios are quite poor, and would do very little to alter the probability of disease, no matter the result.
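To see just how little, recall that a likelihood ratio acts on the pre-test odds. A worked example, assuming (purely for illustration) a 50% pre-test probability of a high-risk lesion and using the study's best-case LR+ of 2.00:

```latex
\text{odds}_{post} = \frac{0.5}{1 - 0.5} \times 2.00 = 2.0
\qquad
p_{post} = \frac{2.0}{1 + 2.0} \approx 67\%
```

Even a coin-flip pre-test probability rises only to about two-thirds with a bloody aspirate, nowhere near certain enough to change management on its own.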
Another study, conducted in Paris, France, compared NGL and erythromycin in terms of ability to clear the stomach and improve gastric visualization during endoscopy (Pateron 2011). In this randomized, controlled trial, 253 patients were randomized to either NGL until clear, a dose of IV erythromycin, or both NGL and erythromycin. The mean visualization score at the time of endoscopy was similar between all 3 groups, with no difference in duration of endoscopy, need for hemostasis, ability to identify the source of bleeding, or need for a second endoscopy.
A third study, undertaken in the West Los Angeles VA system, attempted to compare outcomes between patients with GI bleed who underwent NGL and those who did not (Huang 2011). This retrospective study included 632 patients, of whom 378 underwent NGL. The authors used propensity score matching to try to achieve prognostic balance between the two groups by adjusting for several known confounding factors. Following propensity matching, two groups of 193 patients each were compared. There was no significant difference in mortality (OR 0.84, 95% CI 0.37 to 1.92), mean length of stay (difference 0.80 days, 95% CI -1.4 to 3.0), need for emergency surgery (OR 1.51, 95% CI 0.42 to 5.43), or mean blood transfusion requirement (difference -0.18 units, 95% CI -0.98 to 0.62). Of note, significantly more patients in the NGL group underwent endoscopy (OR 1.71, 95% CI 1.12 to 2.62) and were more likely to undergo earlier endoscopy, suggesting that significant imbalance remained between the groups in spite of propensity matching. This finding significantly limits the internal validity of the study.
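For readers unfamiliar with the technique, the idea is to model each patient's probability of receiving the intervention from measured confounders, then compare outcomes only between patients with similar probabilities. A minimal sketch in Python, using entirely synthetic data and hypothetical variable names (nothing here comes from the Huang dataset):

```python
# Minimal sketch of 1:1 nearest-neighbor propensity-score matching on
# synthetic data; variables are illustrative and not taken from Huang 2011.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 632
# Hypothetical measured confounders: age and a comorbidity score
X = np.column_stack([rng.normal(65, 10, n), rng.normal(2, 1, n)])
# Treatment (NGL) assignment depends on those confounders -> selection bias
logit = 0.03 * (X[:, 0] - 65) + 0.4 * (X[:, 1] - 2)
treated = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Step 1: model P(treatment | confounders) -- the propensity score
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated patient to the untreated patient with the
# closest propensity score (greedy matching, with replacement, for brevity)
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
matches = [control_idx[np.argmin(np.abs(ps[control_idx] - ps[i]))]
           for i in treated_idx]

# Step 3: outcomes would now be compared within the matched cohort only.
# Balance can be checked on *measured* confounders; unmeasured ones remain.
print(f"{len(treated_idx)} treated patients matched to controls")
```

The catch, as the Huang study illustrates, is step 1: the model can only balance confounders that were actually measured, so imbalance in unmeasured factors (such as who got scoped, and when) can persist after matching.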
Finally, a systematic review from 2010 was identified that attempted to determine the accuracy of NGL in differentiating upper from lower GI bleeds in patients with hematochezia or melena without hematemesis (Palamidessi 2010). The authors identified 3 articles; unfortunately, one of these (Aljebreen 2005) did not actually address the question being asked, as it only included patients with upper GI bleeds. For the remaining two articles, positive LRs were 4.74 and 4.44, while negative LRs were 0.2 and 0.65. As before, these likelihood ratios (with the exception of the negative LR of 0.2) suggest that the results of the test would do little to change the probability of an upper (or lower) GI hemorrhage.
In all, there is very little evidence to support the routine use of NG lavage in patients presenting to the ED with suspected upper GI hemorrhage. The only potential benefit not explored was the triage of patients to emergent, urgent, or non-urgent endoscopy, which would likely only be helpful during off hours. Given the poor ability of lavage to identify patients with high risk lesions, and given the significant discomfort associated with the procedure, this seems like a fairly soft reason to place an NG tube. Additional factors associated with poor outcomes in upper GI hemorrhage, such as older age, presence of upper GI malignancy, and variceal disease (Roberts 2012), in addition to signs of clinical instability may provide better triage of these patients, and should be considered prior to NG tube placement.
Contrast-Induced Nephropathy: Myth or Monster
Sep 30, 2017
Journal Club Podcast #38: August 2017
A brief discussion on contrast-induced nephropathy and some of the evidence that suggests it may be as real as Sasquatch himself...
It was a clear black night, a clear white moon...and you're stuck working in EM-2 instead of out regulatin'! One of your patients is Ms. Z, a 52-year-old woman with left lower quadrant abdominal pain. She's quite tender and has some localized guarding, but no rebound. Her WBC is 14.5. You're worried about diverticulitis, possibly with rupture and an abscess, and would like to get a CT scan, but her creatinine is 1.7, which is her baseline. Additionally, she has a history of diabetes and hypertension, and you worry about causing contrast-induced nephropathy (CIN) if you give contrast for the CT.
Your attending assures you that there's no such thing as CIN, that it's as made up as Santa Claus and the Easter Bunny (EmLitofNote: The Latest Myth: Contrast-Induced Nephropathy; EMCrit: Do CT Scans Cause Contrast Nephropathy). As you fight back the tears, your childhood fantasies destroyed, you call the radiologist to discuss what to do. The radiologist shares your concerns and suggests that in this "not overly skinny" woman, contrast shouldn't be necessary.
Lo and behold, the CT shows uncomplicated diverticulitis, and Ms. Z goes home on oral antibiotics, her remaining nephrons safe and secure. But as you end your shift, eyes heavy with fatigue, you wonder: was your attending right about CIN (and the poor little Easter Bunny), or was the radiologist right to be concerned? You head home, crash, wake up refreshed, and begin to search the literature…
PICO Question:
Population: Adult ED patients undergoing CT scanning
Intervention: Administration of intravenous contrast for enhancement of CT scan
Comparison: No contrast administration for enhancement of CT scan
Outcome: Acute kidney injury, chronic kidney disease, need for dialysis, mortality
Search Strategy:
An article published in a recent issue of Annals of Emergency Medicine (Hinson 2017) was chosen as the impetus for this journal club. A meta-analysis referenced in that article was also included, along with two primary research studies.
Bottom Line:
Iodinated contrast media was once cited as the third most common cause of iatrogenic acute kidney injury (Hou 1983). Previous research on the incidence of contrast-induced nephropathy (CIN) associated with intravenous contrast for CT scans in the ED has found the rate to be around 11%, with much lower rates of severe renal failure (1%) and death due to renal failure (0.6%) (McDonald 2014). Other studies have reported similar rates (Mitchell 2007, Mitchell 2012).
The problem with these cohort studies is that while they demonstrate the incidence of AKI in patients receiving IV contrast, they do not necessarily establish contrast as the cause of the AKI. Patients receiving IV contrast typically have some underlying issue requiring them to undergo CT scanning or angiography, and some percentage of these patients would develop AKI independent of contrast administration. As a result, several observational studies have been undertaken to compare the incidence of AKI and other outcomes in patients receiving contrast to the incidence in patients not receiving contrast.
While some older studies have demonstrated an increased incidence of AKI among patients receiving IV contrast when compared with controls (Heller 1991, Polena 2005), these studies failed to control for potential confounders. Studies that have controlled for such confounders, typically using propensity score matching, have found no increased incidence of AKI, severe kidney failure, or death due to renal failure compared to patients not receiving contrast (McDonald 2014, Hinson 2017). A meta-analysis of all such studies, performed in 2013, similarly failed to demonstrate a statistically significant increase in the incidence of AKI (McDonald 2013).
While the bulk of the data thus far does not suggest a clear association between IV contrast administration and acute kidney injury (i.e. CIN), no randomized controlled trials have been performed up to this point. While more recent studies have used methods such as propensity matching to help control for known confounding factors, these studies are not able to control for unknown confounders, and similarly have not controlled for potentially renoprotective interventions undertaken after contrast administration (e.g. IV fluid and bicarbonate administration, withholding of potentially nephrotoxic drugs). It would therefore be difficult to advocate for a change in clinical practice without such randomized controlled trials.
There have been several barriers to performing such studies, including the assumption that IV contrast is harmful. The current evidence may help break down that barrier by establishing that there is clinical equipoise regarding this issue, but other issues remain. Perhaps the most significant is the potential harm of withholding IV contrast in patients undergoing CT who would benefit from contrast enhancement, making it unethical to randomize patients to a non-contrast arm in such a study. Unfortunately, until further evidence is available, it seems prudent to consider withholding IV contrast in patients felt to be at high risk of developing AKI, with the caveat that in some emergent cases (e.g. possible aortic dissection), the risks of withholding contrast may outweigh the risks of developing kidney injury.
Controversies in the Diagnosis and Management of Cellulitis
Aug 02, 2017
Journal Club Podcast #37: July 2017
A brief discussion of several of the controversies that have popped up in the last few years regarding the diagnosis and management of cellulitis...
You're moonlighting in a local ED one afternoon when you encounter Mrs. X, a 40-year-old woman with rheumatoid arthritis, for which she takes methotrexate. She was gardening three days prior to presentation when she suffered a small cut to her left ankle from a misplaced spade. The following day, there was some mild erythema around the wound, which has since progressed. She now has redness, warmth, and mild swelling of the lateral ankle and distal calf, with no signs of lymphangitis and no fluctuance. The ankle joint moves easily and without pain. As she is afebrile and well appearing, you discuss the plan with her PMD and send her out on Bactrim and Keflex, to cover both Strep species and MRSA.
The very next patient you meet is Mr. Y, a 50-year-old obese male with CHF. He has had swelling in both his legs for quite some time, chalked up in the past to chronic lymphedema and CHF, but now has redness and pain to both ankles and lower legs. Given the severity of the redness and swelling, you elect to treat the patient for cellulitis and order vancomycin, then place an admission order. The hospitalist muses that perhaps the patient has venous stasis dermatitis, but admits that it's probably worth treating for potential cellulitis.
Thinking back to both patients later in the day, you begin to worry about your treatment plans. Should the immunosuppressed woman have been admitted for her cellulitis? What factors make patients more prone to treatment failure? Do you always need to prescribe both Bactrim and Keflex for cellulitis (see IDSA guidelines for SSTIs)? And finally, could the second patient have had stasis dermatitis, and if so, did he really need antibiotics and admission? You decide to look into the evidence to try to answer these questions, and dive right into the literature...
PICO Question:
Given the nature of the journal club this month, no specific PICO question was devised. Instead, we looked at several controversial issues surrounding the management of cellulitis, including diagnostic accuracy, antibiotic selection, risk factors for treatment failure, and prescribing practices.
Search Strategy:
Again, due to the nature of the journal club, no specific search strategy was undertaken. Recent high-impact articles were selected from the medical literature, some due to their highly controversial nature.
Bottom Line:
Cellulitis, a common skin infection, results in around 2.3 million ED visits in the US annually. This number has risen over the years with the increasing prevalence of community-acquired MRSA (CA-MRSA) (Pallin 2008). Despite these rising numbers, there remains significant controversy regarding the diagnosis and management of this common condition, in part due to the lack of objective diagnostic criteria, the presence of several hard-to-distinguish mimics (Weng 2016), and difficulties in determining the bacterial etiology in the majority of cases (Jeng 2010).
The most recent guidelines from the Infectious Diseases Society of America (IDSA) do not recommend adding MRSA coverage for the management of mild or moderate non-purulent skin and soft-tissue infections (i.e. cellulitis and erysipelas). The PGY-4 paper (Moran 2017) found that among patients treated as outpatients for cellulitis, cephalexin alone resulted in cure rates similar to cephalexin plus trimethoprim-sulfamethoxazole, supporting the IDSA recommendations. It should be noted, however, that this recommendation does not apply to patients with fever or leukocytosis, or to immunocompromised patients. In our PGY-2 paper (Pallin 2014), the authors determined, among other things, that 63% of patients with cellulitis were given antibiotic regimens that included CA-MRSA coverage. Unfortunately, they did not attempt to determine how many of these patients had criteria that would exclude them from the IDSA recommendation, but instead insinuate that nearly all of them were being treated inappropriately. They even go so far as to recommend using this as a reported quality measure for Medicare’s Physician Quality Reporting System, a suggestion that is both premature and potentially dangerous.
Our PGY-3 article (Weng 2016) went a step further, attempting to determine the costs associated with misdiagnosis of lower extremity cellulitis in the US. The authors report that 30.5% of patients admitted to the hospital with lower extremity cellulitis in their study were misdiagnosed, and that the majority of these patients did not require hospital admission. Using a literature review, they then estimated that such misdiagnoses cost between $195 and $515 million annually throughout the US. Unfortunately, all of these conclusions are based on a highly methodologically flawed retrospective study in which the final diagnosis was determined by chart review out to thirty days post-discharge. It is quite likely that the retrospective conclusion of misdiagnosis was, in many cases, itself a misdiagnosis. Additionally, the authors offer no direction on how to avoid such proposed misdiagnoses, failing to consider the amount of data available 30 days after presentation that would not be available to the ED physician at the time of presentation (e.g. response to treatment). They also fail to note that among misdiagnosed patients who were deemed not to require hospital admission at all (determined retrospectively by dermatologists), the mean length of stay was over 4 days! This suggests either that these patients did, in fact, need to be admitted, or that the ability to differentiate cellulitis from “pseudocellulitis” did not become evident until several days of observation had passed. An editorial written in response to this review notes many of these issues, but also calls for improved diagnostic capabilities and discussion between the ED and admitting physicians (Moran 2017), which seems more than reasonable.
Our PGY-1 paper (Peterson 2014) found that fever (odds ratio [OR] 4.3), chronic leg ulcers (OR 2.5), chronic edema or lymphedema (OR 2.5), prior cellulitis in the same area (OR 2.1), and cellulitis at a wound site (OR 1.9) were all predictors of failure of outpatient management of cellulitis.
All of this evidence suggests that cellulitis can be a difficult diagnosis fraught with controversy. Care should be taken when diagnosing lower extremity cellulitis, as there are many mimics that do not require antibiotics. Care should also be taken in those patients with risk factors for failed outpatient therapy, with close follow-up and good return precautions given to such patients. Additionally, improved adherence to current IDSA guidelines would likely result in use of fewer antibiotics with fewer adverse effects.
Diagnosis of Atraumatic Subarachnoid Hemorrhage
Jul 17, 2017
Journal Club Podcast #36: June 2017
Dr. Chris Carpenter and Marco Sivilotti bridge the US-Canada border to bring us a new podcast on the diagnostic approach to atraumatic SAH...
Articles:
Article 1: Sensitivity of early brain computed tomography to exclude aneurysmal subarachnoid hemorrhage: A systematic review and meta-analysis, Stroke 2016; 47: 750-755. (http://pmid.us/26797666) Answer Key.
Article 2: False-negative interpretations of cranial computed tomography in aneurysmal subarachnoid hemorrhage, Acad Emerg Med 2016; 23: 591-598. (http://pmid.us/26918885) Answer Key.
Article 3: Spontaneous subarachnoid hemorrhage: A systematic review and meta-analysis describing the diagnostic accuracy of history, physical examination, imaging, and lumbar puncture with an exploration of test thresholds, Acad Emerg Med 2016; 23: 963-1003. (http://pmid.us/27306497) Answer Key.
Article 4: Determination of a testing threshold for lumbar puncture in the diagnosis of subarachnoid hemorrhage after a negative head computed tomography: A decision analysis, Acad Emerg Med 2016; 23: 1119-1127. (http://pmid.us/27378053) Answer Key.
Vignette:
Mrs. Z. is a healthy 30-year-old female who presents to your emergency department 2 hours after onset of “the worst headache of my life,” which peaked within 1 minute of onset but was not “thunderclap.” She describes the headache as diffuse with associated nausea, but no photophobia, neck stiffness, vomiting, focal motor-sensory deficits, or fever. She reports no recent viral or febrile illness, blunt trauma, travel history, or sick contacts. Nobody else at home or work has had a headache recently. Her immunizations are up-to-date and she denies any significant PMH, including no history of migraine (or other) headache disorders, cerebral aneurysm, stroke, HIV or other immunocompromising disorder, pseudotumor cerebri, malignancy, bleeding diatheses, or meningitis. She denies recent alcohol use or any history of intravenous or illicit substance abuse. She has not tried any medications for her headache. She is a secretary and lives at home with her husband and 3-year-old child.
In the ED, her vitals are BP 106/55, P 58, RR 18, T 37.2°C, and 100% oxygen saturation on room air. She is in no acute distress, and her exam is unremarkable.
As you consider the diagnosis of subarachnoid hemorrhage and await her CT, you weigh the value of a post-CT lumbar puncture. You recall a recent meta-analysis discussed on your favorite “podcast” Emergency Medical Abstracts (listen here) as well as an episode of McMaster’s Textbook of Internal Medicine (watch here) and wonder how other researchers and specialties interpret the best evidence. Realizing that there are two distinct questions for the ED diagnosis of SAH, you develop two PICO questions.
PICO Question:
PICO Question #1
Population: ED patients with sudden onset severe headache concerning for aneurysmal SAH (aSAH)
Intervention: Computed tomography within 6 hours of symptom onset
Comparison: No comparator
Outcome: CT sensitivity, specificity, positive/negative likelihood ratio for aSAH
PICO Question #2
Population: ED patients with sudden onset severe headache concerning for aneurysmal SAH (aSAH)
Intervention: Lumbar puncture following unremarkable CT
Comparison: No comparator
Outcome: LP sensitivity, specificity, positive/negative likelihood ratio for aSAH
Search Strategy:
Using these PICO questions, you devise a PubMed search using Clinical Queries (diagnosis/broad) and the search term “aneurysmal subarachnoid hemorrhage” which identifies 2891 studies (see http://tinyurl.com/zrvgc3s) from which you select the 4 studies below from last year.
Bottom Line:
Needle in a Haystack
Headaches represent about 2% of emergency department (ED) visits annually. While severe headache patients presenting with altered mental status, fevers, or associated trauma usually generate sufficient concern to justify further diagnostic evaluation, most other sudden onset headaches ultimately prove to result from benign, non-life-threatening causes like migraine. The myriad causes of sudden onset headache include cough, exertion, and post-coital headache, but also potentially life-threatening conditions like sinus thrombosis, vascular dissection, intracerebral hemorrhage, vasospasm, and aneurysmal subarachnoid hemorrhage. [Landtblom 2002, Delasobera 2012, de Bruijn 1996, Pascual 1996, Dodick 1999] Observational studies indicate that migraine headaches are at least 50 times more common than SAH amongst ED headache patients, so SAH represents a needle in a haystack for a very common chief complaint. [Edlow 2003] SAH is misdiagnosed in 12%-53% of cases, with ED providers estimated to miss about 5%. [Edlow 2000, Vermeulen 2007]
Up to 80% of SAH cases result from a ruptured cerebral aneurysm. Other causes of SAH include low-pressure perimesencephalic bleeds, cerebral amyloid angiopathy, vasculitis, sickle cell disease, and cocaine or amphetamine abuse. [Carpenter 2016] One-fourth of aneurysmal SAH victims die within one day, and 50% of SAH survivors never return to work. Correctly identifying SAH early can reduce these adverse outcomes if subsequent neurosurgical interventions (coiling or clipping) occur emergently. [Schievink 1997] Therefore, the possibility of aneurysmal SAH must be considered in ED patients presenting with severe headaches. Understanding the SAH diagnostic evidence available for bedside evaluation, advanced imaging, and the role for lumbar puncture (LP) is therefore essential, and the landscape is shifting.
Non-Contrast Cranial CT
Computed tomography (CT) became widely available for the evaluation of headache patients about 30 years ago. Early CTs were 4-slice, and radiologists’ interpretative learning curves were steep. These early CTs were imperfect (sensitivity ~90%) for identifying small amounts of blood in the subarachnoid space, so textbooks, guidelines, and several generations of emergency medicine trainees advised against a CT-only approach to rule out aneurysmal SAH. Instead, SAH could only be ruled out when a negative CT was followed immediately by an LP demonstrating cerebrospinal fluid (CSF) without either red blood cells or (at least 12 hours post-headache onset) xanthochromia. However, in 2017 there are three problems with that approach: (1) contemporary CTs are far better at identifying blood in the subarachnoid space; (2) LPs frequently identify blood that is not from the subarachnoid space (traumatic LPs); and (3) CSF xanthochromia is not an accurate diagnostic test for SAH.
A common rule of thumb is that a positive likelihood ratio (LR+) > 10 defines sufficient accuracy to rule in a disease, while a negative likelihood ratio (LR-) < 0.1 can rule one out. Both Perry 2011 and Backes 2012 demonstrate acceptably safe CT accuracy for SAH when imaging is obtained within 6 hours of headache onset (summary LR+ 235, summary LR- 0.01). Beyond 6 hours, CT is still accurate to rule in SAH (summary LR+ 223) but less impressive for ruling it out (summary LR- 0.07, 95% CI 0.01-0.61). These studies used neuro-radiologists to interpret the CTs, raising the question of whether general radiologists’ accuracy would match these results. Recent studies indicate that general radiologists are just as accurate as neuro-radiologists for the diagnosis of SAH.
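To see what those summary LRs mean at the bedside, a likelihood ratio can be pushed through Bayes’ theorem in odds form. A minimal sketch (the 7.5% pre-test probability is the weighted SAH prevalence from the meta-analysis discussed below; the helper function itself is generic):

```python
def post_test_prob(pretest: float, lr: float) -> float:
    """Apply a likelihood ratio to a pre-test probability (odds-form Bayes)."""
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Weighted SAH prevalence (~7.5%) through the summary negative LRs:
print(post_test_prob(0.075, 0.01))  # CT within 6 hours -> ~0.001 (0.1%)
print(post_test_prob(0.075, 0.07))  # CT beyond 6 hours -> ~0.006 (0.6%)
```

These are the post-CT probabilities quoted in the Journal Club discussion below.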
Post-CT LP – Quantifying Benefits and Harms
Most European hospitals use spectrophotometry to evaluate the presence or absence of xanthochromia, whereas 99% of North American hospitals use visual inspection. Recent meta-analyses indicate that visible xanthochromia can rule in SAH (LR+ 25) but is less accurate for ruling it out (LR- 0.22). At least four spectrophotometric methods to evaluate xanthochromia exist, and most ED headache studies report different methods, but spectrophotometry using any method doesn’t appear to be significantly better than visible xanthochromia for ruling SAH in or out. [Carpenter 2016, Chu 2014] At a threshold of 2000 × 10^6 red blood cells per liter in the final tube of CSF collected, the LR+ is 10.3 and the LR- is 0.07 (95% CI 0.01-0.49). [Perry 2015] Importantly, up to 30% of patients experience worsening post-LP headache, in addition to the risks of post-LP back pain, epidural bleeding, and introduction of skin flora into the central nervous system. [Seupaul 2005] Patients are not the only reluctant participants in this exercise; physician barriers to routine post-CT LP include inadequate time in the busy ED and the expectation of normal or non-diagnostic results in most patients (low diagnostic yield). In fact, up to 1 in 6 LPs are traumatic (blood detected from skin or superficial soft tissue, not from the subarachnoid space). Consequently, LP is not performed in over half of acute headache patients in whom a CT is obtained. [Perry 2010, Perry 2013]
The ultimate objective is not to understand test accuracy; instead, the goal is to deliver the appropriate care to the right patients. How is a busy ED physician supposed to interpret all of these numbers and communicate with patients meaningfully to allow shared decision making? One approach to balancing the harms and benefits of post-CT LP is to hypothesize test and treatment thresholds. The test threshold describes the probability of a diagnosis (aneurysmal SAH) below which continuing to test for the diagnosis will harm more patients than it will help, whereas above the threshold additional testing will benefit more patients than it harms. This threshold is derived from the test accuracy (CSF xanthochromia or RBC), the risk of the test (post-LP headache, infection, or epidural bleed), the benefit of treatment for those with disease, and the harms of treatment for those without disease (since false positives will receive the treatment with no possibility of benefit, as they don’t actually have the disease). Based upon one recent diagnostic meta-analysis, the range in which post-CT LP would benefit patients is quite narrow (2%-4% for CSF RBC or 2%-7% for visible xanthochromia). Doubling the assumed benefit and halving the assumed harm did not change these thresholds significantly. Still not convinced? Considering the benefit of LP alone (and neglecting its potential harms), the Number Needed to LP (NNLP) to identify one CNS infection in acute onset headache patients is 227 [Brunell 2013], while the NNLP to identify one additional aneurysmal SAH missed by CT within 6 hours and upon which neurosurgical intervention occurs ranges from 250 [Perry 2011, Sayer 2015] to 15,200! [Blok 2015]
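The arithmetic behind this framework is worth making explicit. In the classic threshold model (shown here in a simplified form that ignores the risk of the test itself), treatment is worthwhile only when the probability of disease exceeds:

```latex
p^{*} = \frac{\text{Harm (of treating the non-diseased)}}{\text{Harm} + \text{Benefit (of treating the diseased)}}
```

The test thresholds above extend this trade-off by folding in the test’s accuracy and its own risks, which is how the narrow 2%-4% (CSF RBC) and 2%-7% (visible xanthochromia) windows were derived.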
Journal Club Discussion Between Neurosurgery & Emergency Medicine
Here is a synopsis of the Journal Club discussion between Neurosurgery and Emergency Medicine, including Dr. Sivilotti, the senior author of the PGY-III meta-analysis.
CT within 6 hours of thunderclap headache onset is sufficiently sensitive to rule out SAH without LP, provided the CT is 16-slice or greater with cuts < 5 mm.
Non-contrast head CT is an excellent test for ED headache patients in whom SAH is a concern, but like every other test, it is not 100% sensitive. Scenarios in which false negative CTs occur include imaging more than 6 hours after headache onset, less severe bleeds (as measured by Hunt-Hess classification), anemia (Hgb < 10), and non-neuro-radiologist readers. In those scenarios, shared decision making with the patient about additional testing (CTA or LP) is warranted but should include informed consent about anticipated benefits and potential harms.
Less than one in 10 headache patients concerning for SAH are ultimately diagnosed with SAH in recent studies. While certain symptoms and signs increase or decrease the likelihood of SAH, no single characteristic on history or physical exam is sufficient to rule SAH in or out. Within 6 hours of symptom onset, non-contrast cranial CT is highly accurate, while a negative CT beyond 6 hours substantially reduces the likelihood of SAH. Based on an anticipated treatment benefit of 80% with immediate aneurysmal SAH (aSAH) diagnosis, an LP risk of 1% (including significant post-dural-puncture headache, discomfort, and infection), and a total (cancer + other) risk for angiography of 2%, LP would only be helpful across a pre-LP probability of aSAH of 2%-7% (if using visible xanthochromia) or 2%-4% (if using CSF RBC > 1000). Therefore, LP appears to benefit relatively few patients within a narrow pretest probability range. With improvements in CT technology and an expanding body of evidence, test thresholds for LP may become more precise, obviating the need for a post-CT LP in more acute headache patients. Existing SAH clinical decision rules await external validation, but offer the potential to identify the subsets most likely to benefit from post-CT LP, angiography, or no further testing.
Decision analysis using accepted methods but a sparse, low-quality evidence base demonstrates a post-CT LP test threshold that is significantly higher than the pretest probability of most ED headache patients in whom SAH is considered. Nonetheless, the similar test-threshold point estimate for post-CT LP compared with Carpenter et al (which used very different methods to derive a test threshold) indicates that current textbook and guideline recommendations (which advocate post-CT LP for everyone) merit revision.
The discussion amongst the Journal Club attendees focused on two issues.
1) Some neurosurgeons expressed concerns about the risk of aneurysmal re-bleeding within 24 hours, which has been recognized to be about a 6% risk since observational studies were conducted in the 1950s. Some of the neurosurgeons also noted that no test is perfect, so their traditional approach has been that more testing is generally better for patients and providers. Dr. Sivilotti made the point that attempting to attain diagnostic perfection (a 0% miss rate) leads to operational inefficiencies for the hospital and eventually harms some patients.
2) Some noted uncertainty about how to derive the initial pre-test probability estimate by which to guide post-CT decision-making about the benefits vs. harms of LP. It was noted that the PGY-III meta-analysis included a weighted average prevalence of SAH of 7.5% in recent prospective studies of headache patients in whom SAH is a concern. This meta-analysis prevalence is one evidence-based estimate of pre-test probability. The point was then made that this 7.5% probability of SAH falls within the 2%-7% range for which LP benefits outweigh risks, but the meta-analysis authors noted that the 7.5% prevalence estimate is pre-CT. A CT may be obtained less than 6 hours post-headache onset (summary LR- = 0.01, so the post-CT probability of SAH is 0.1%) or later than 6 hours (summary LR- = 0.07, so the post-CT probability of SAH is 0.6%). Either way, the post-CT probability of SAH is far below the lower margin of benefit (2%). The issue with CT diagnostic accuracy beyond 6 hours is the 95% confidence interval, which extends to 0.61, so these situations require more careful contemplation and perhaps offer a role for CT angiography – a proposal which neurosurgery eagerly agreed requires additional research at Washington University.
Future Research Priorities
Carpenter et al. was grounds for practice change for many ED diagnosticians, but has also received criticism – mainly on philosophical grounds. Is the role of emergency medicine to definitively rule particular diagnoses in or out, or simply to exclude (or minimize) life threats without putting a label of certainty on symptom etiology? Based upon the personal observations that thunderclap headache represents a potential life threat that is not SAH in over half of cases, and that a CT/LP approach alone will fail to identify other life threats without consulting Neurology and/or Neurosurgery and obtaining additional advanced imaging at the discretion of these consultants, some argue that diagnostic research based upon ruling SAH alone in or out is inherently flawed and not worthy of altering the long-standing practice paradigm of CT, then LP if CT is non-diagnostic. However, this approach neglects to consider the potential harms of LP, the number of additional studies required to save one additional life or avert one bad outcome, or the role for shared decision making. It is worth noting that the perspective that thunderclap headache more commonly represents a non-SAH life-threatening diagnosis than SAH or migraine is counterintuitive to most ED research. This likely reflects spectrum bias, in that sicker patients with a higher likelihood of life-threatening disease and poor outcomes are being labeled as having “thunderclap headache” more often than is the norm in published research and common practice. Indeed, a subset of acute headache patients who do benefit from post-CT LP almost certainly exists, including those with hemoglobin less than 10 g/dL (increased risk of false-negative CT), a significant delay between headache onset and CT, and those in whom diagnoses like CNS infection or pseudotumor cerebri are being contemplated. The art of medicine is identifying the individual patients who are more likely to benefit from additional testing, rather than applying a one-size-fits-all mandate of routine post-CT LP.
Many questions remain unanswered. Can general radiologists rule-out aneurysmal SAH as well as neuro-radiologists? Will clinical decision aids provide an additional risk-stratification mechanism by which to identify subsets more likely to benefit from post-CT LP? Is there a role for CT angiography (CTA) following a non-diagnostic non-contrast CT? If so, which patients would benefit from CTA?
Bottom Line: The decades-old dogma that acute headache patients in whom SAH is a consideration must uniformly undergo an LP following a non-diagnostic CT appears unnecessary for a large subset of these patients, and may lead to harm via the additional downstream testing that results from imperfect, non-specific CSF findings.
Risk of Delayed Traumatic ICH in Patients on Anticoagulation
Jun 27, 2017
Journal Club Podcast #35: May 2017
After a brief interlude where I shamelessly self-promote my new novel, I talk about the risks of delayed bleeding in head injury patients on warfarin...
You are working a moonlighting shift at a local level II trauma center when you meet Mr. X, a 68-year-old gentleman with a history of atrial fibrillation, for which he takes diltiazem for rate control and warfarin for anticoagulation. He sees his primary care physician on a regular basis and has his international normalized ratio (INR) checked once a week. It has been between 2.0 and 3.0 consistently for the last 6 months. This morning, while walking his dog, a rare crossbreed known as a great doodle (a cross between a great Dane and a poodle), he was tripped up by the leash and fell forward, striking his forehead on the concrete. He suffered no loss of consciousness, has a mild headache, and has had no nausea or vomiting. His wife states that he has had no altered mental status since the fall.
On exam he has a GCS of 15, a superficial abrasion to his forehead with a small 4 cm hematoma, no cervical spine pain or tenderness, and a normal neurologic examination. His INR today is 3.2. Being an astute reader of the literature, you remember that the studies of the Canadian CT Head Rule excluded patients on anticoagulation, and proceed to order a head CT, which is read as normal by the attending radiologist (not a neuroradiologist).
After updating the patient’s tetanus booster you discharge him home in the care of his wife. That night after your shift, you begin to worry about your patient and his risk of delayed intracranial hemorrhage given his anticoagulant use. You also wonder if his risk would be higher if he were on one of the novel anticoagulants (rivaroxaban, apixaban, etc). Unable to sleep, you head online and begin to search the literature for answers.
PICO Question:
Population: Patients on anticoagulation therapy suffering minor head injury
Intervention: Observation and/or repeat CT scan of the head
Comparison: Discharge after normal initial head CT
Outcome: Risk of delayed intracranial hemorrhage leading to a change in management
Search Strategy:
A previous journal club covering delayed intracranial bleeding in the setting of anticoagulation was conducted in 2012 (http://emed.wustl.edu/Journal-Club/Archive/August-2012). The online archive was searched and two of the articles were chosen for inclusion. PubMed was searched using the terms anticoagulation AND “delayed intracranial hemorrhage”, resulting in 7 articles (http://tinyurl.com/y9yjn3go). Among these, a systematic review and meta-analysis was chosen as well as one of the articles included in the systematic review.
Bottom Line:
Traumatic brain injury results in just over 1.3 million emergency department (ED) visits, 275,000 hospitalizations, and 52,000 deaths annually in the United States alone, with an increase in the combined rate of ED visits, hospitalization, and death from 521 per 100,000 in 2001 to 823.7 per 100,000 in 2010 (CDC TBI Report). In elderly patients suffering a fall, long-term anticoagulation has been shown to increase not only the incidence of intracranial hemorrhage (ICH) compared to those not on anticoagulation (8.0% vs. 5.3%, p < 0.0001), but to also increase mortality in those with ICH (21.9% vs. 15.2%, p = 0.04) (Pieracci 2007). Additionally, the use of warfarin prior to blunt head trauma has been shown to increase mortality compared to those not taking anticoagulants, with an odds ratio of 2.008 (95% CI 1.634-2.467) (Batchelor 2012). Unfortunately, the rate of pre-injury warfarin use has been increasing in trauma patients in the US, from 2.3% in 2002 to 4.0% in 2006 (P < .001); in patients older than 65 years, use increased from 7.3% in 2002 to 12.8% in 2006 (P < .001) (Dossett 2011).
Given the increasing number of head injury patients seen in the ED, and the increase in concomitant anticoagulant use, the clinical dilemmas surrounding these patients have become more and more relevant. Studies in patients taking warfarin who suffer minor head injury have shown incidences of ICH ranging from 6.2%-29% (Li 2001, Gittleman 2005, Brewer 2011), leading some authors to conclude that most, if not all, such patients should undergo routine cranial CT scanning on presentation (Brewer 2011, Cohen 2006, Fabbri 2004). One important question surrounds the prognostic implications of a normal cranial CT in head injury patients on anticoagulant therapy. While some European guidelines suggest that all anticoagulated patients with head injury should be admitted for a period of routine observation (Vos 2002, Ingebrigtsen 2000), these recommendations are not based on studies of the prevalence of delayed ICH.
In the three primary studies reviewed, the incidence of delayed ICH following a normal CT scan in patients taking warfarin ranged from 0.6% to 6%; the meta-analysis revealed a pooled risk of 0.6%. However, if a diagnosed ICH has no effect on the patient’s outcome or treatment, then it would be considered a surrogate outcome, which is often “used as a substitute for a clinically meaningful endpoint that measures directly how a patient feels, functions or survives (Thomas 1995).” As such outcomes are often found to be clinically insignificant, their use has been questioned in the literature (Guyatt 2011, Fleming 1996), and the incidence of patient-important outcomes should be considered instead. In these studies, the majority of patients found to have delayed ICH required no neurosurgical intervention and had no adverse outcome documented. The incidence of death or neurosurgical intervention ranged from 0 to 1.1%, with a pooled incidence of 0.13% in the meta-analysis.
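Those two pooled numbers frame the practical question. An illustrative calculation (not a formal NNT, since routine observation and rescanning have not been shown to change these outcomes):

```latex
\frac{1}{0.0013} \approx 770
```

In other words, on the pooled estimate, roughly 770 anticoagulated patients with a normal initial CT would need to be observed and rescanned to find one delayed ICH resulting in death or neurosurgical intervention.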
The authors of one of the articles reviewed suggest that “our data support the general effectiveness of the European Federation of Neurological Society’s recommendations for 24-hour observation followed by a repeated head CT scan for anticoagulated patients with a minor head injury” (Menditto 2012). However, this conclusion is based on the incidence of delayed ICH (6%) rather than the incidence of clinically important outcomes (1.1%). In this study, only one patient out of 87 suffered a clinically significant delayed ICH. The study mentions that one patient showed signs of neurologic deterioration; however, the authors do not say whether this was the same patient who required neurosurgical intervention. If so, this would suggest that observation alone would suffice to detect any clinically significant delayed ICH.
Additionally, the authors do not perform a cost-effectiveness analysis to support their conclusion. In a subsequent editorial appearing in the same journal, it is suggested that a protocol of 24-hour observation and routine repeat CT scanning would cost an average of just over $1 million per patient saved (Li 2012). The author of the editorial suggests that home observation and phone call follow-up would be more cost-effective, and likely as safe, though this has not been studied.
While the current literature does not support routine hospital observation for 24 hours or repeat cranial CT scans in all anticoagulated patients with head injury, this may be warranted in those at increased risk of delayed bleeding, such as those with supratherapeutic INR levels or concomitant antiplatelet therapy. Further studies are needed to identify these patients at higher risk of delayed bleeding and to determine appropriate management. Furthermore, as newer anticoagulants such as apixaban, dabigatran, and rivaroxaban enter the market and begin to replace warfarin, further studies on the risk of delayed hemorrhage may be necessary to determine the best management strategy for patients on these medications.
Blood Pressure Management in Spontaneous Intracerebral Hemorrhage
Nov 23, 2016
Journal Club Podcast #34: October 2016
A short discussion of optimal blood pressure management in ICH, looking at intensive BP control vs. guideline-based control...
It's a slow Sunday morning in TCC when you get a page that a fifty-year-old patient with a sudden onset of confusion is en route. Per the page, the patient's BP is 210/110, with a heart rate of 85. The patient arrives to the ED awake, alert, but clearly confused. He answers all questions inappropriately and only follows commands with repeated questioning. His BP is still elevated (208/113).
After getting a finger-stick blood sugar, which is 105, a normal ECG, and sending off labs, you rush the patient for a head CT. The CT shows an intraparenchymal hemorrhage in the left basal ganglia without intraventricular extension. The volume of blood is about 30 mL.
You immediately place a consult to neurology while the patient is transported back to the room. The nurse rechecks the patient's BP, which is now 218/115, and asks what you would like to do to treat this hypertension. You remember learning that when managing BP in intracerebral hemorrhage (ICH) there is a balance between reducing further bleeding and perfusing the rest of the brain, but you aren't sure what the optimal goal BP is or how quickly you should try to achieve it. After discussing this with the neurologist, you decide to do a quick literature search and see what the evidence shows...
PICO Question:
Population: Adult patients with spontaneous ICH and elevated BP
Intervention: Aggressive lowering of blood pressure
Comparison: Standard lowering of blood pressure (i.e. SBP below ~180 mmHg)
Outcome: Death, functional status, quality of life, cost, length of stay
Search Strategy:
A PubMed “Clinical Queries” search was performed using the terms “intracerebral hemorrhage” AND “blood pressure” with category set to Therapy and scope to Broad. The search was then limited to studies published in the last 5 years using human subjects (http://tinyurl.com/zhj7zgs). This strategy resulted in 143 articles, of which 3 randomized controlled trials and 1 meta-analysis were chosen. The Cochrane database of systematic reviews was also searched, but did not identify an additional meta-analysis.
Bottom Line:
Optimal blood pressure management in patients with spontaneous intracerebral hemorrhage (ICH) is complicated by the balance between hematoma size and cerebral perfusion. An association between maximum systolic blood pressure and hematoma enlargement has been shown (Ohwaki 2004), but must be tempered by the opposing risk of reduced cerebral blood flow with overly aggressive reductions in blood pressure (Butcher 2003). A small study published in 2013, however, found no difference in relative perihematoma blood flow in patients treated with more aggressive BP goals (SBP < 150 mmHg) compared to traditional goals (SBP < 180 mmHg). This finding opened the door to further clinical research on the effects of aggressive BP lowering in ICH.
In 2010, the AHA guidelines for management of ICH suggested that in patients with significantly elevated blood pressure (SBP > 180 or MAP > 130), a “modest reduction” in BP should be considered (SBP < 160, MAP < 110). This arbitrary BP goal was challenged in 2013 by the publication of the INTERACT2 trial. This seminal trial compared traditional BP management to more intensive BP lowering (goal SBP < 140 mm Hg within one hour), and found no statistically significant improvement in the primary outcome of functional status. The authors performed a post hoc analysis of the data, however, and did find an improvement in functional status using the newly popularized “ordinal analysis” of the data. This finding spurred further debate, resulting in additional studies on this subject.
A meta-analysis published in 2014 sought to shed further light on this subject, pooling the results of the INTERACT2 trial with its pilot study (INTERACT1), the ICH ADAPT study, and one additional study that lacked the funding to devise a kickass acronym. This meta-analysis confirmed the results of the INTERACT2 trial, which is far from surprising when you consider that the vast majority of patients (~85%) came from that particular study.
Earlier this year, the ATACH-2 trial was published. Using similar methodology to the INTERACT2 trial investigators, the authors of this international, multi-center trial also found no statistically significant improvement in functional outcomes with more aggressive BP reduction, thereby confirming the prior results of INTERACT2 and the pursuant meta-analysis.
Medical Expulsive Therapy (Tamsulosin) for Ureteral Colic
Oct 27, 2016
Journal Club Podcast #33: September 2016
A brief discussion of the growing literature on the use of tamsulosin (and sometimes nifedipine) in ureteral colic...
You are working in TCC one busy evening, kicking ass and saving lives. In the middle of the primary survey of a critically ill level one trauma, you are suddenly hit by a sharp, 10 out of 10 pain in your right side. Thinking that Doug Schueurer might have punched you, you turn around swiftly and see that he is on the other side of the room. After the patient is stabilized, you run to the bathroom and begin vomiting. Dr. Wagner knocks on the door and tells you to quit being dramatic and get back to work, which you faithfully do.
The pain continues through to the end of your shift, at which time you check yourself in as a patient. Your vitals are stable, with a heart rate of 105. You have some improvement in your pain with IV morphine and Toradol. Your creatinine is normal and your UA shows a moderate amount of blood, with no signs of infection. An ultrasound (which you remember from a previous journal club is useful for diagnosing ureteral stones) reveals a 4 mm stone in the right distal ureter with mild hydronephrosis.
After tolerating a PO challenge (and yes, eating one of our turkey sandwiches is a challenge), you are ready to go home. You leave with prescriptions for zofran, vicodin, and Flomax. Having heard horror stories about people developing orthostatic hypotension while taking Flomax, you wonder whether it provides any real benefit. When you get home, you decide to do a literature search and see what the evidence shows.
PICO Question:
Population: Adult patients with ureteral stones not requiring urgent surgical intervention
Intervention: Medical expulsive therapy with tamsulosin (or nifedipine)
Comparison: Standard care without medical expulsive therapy
Outcome: Time to stone passage, pain level, need for surgical intervention, quality of life, patient satisfaction
Search Strategy:
The articles chosen for the 2008 journal club were reviewed, and the meta-analysis used at that time was chosen as one of the articles. PubMed was then searched using the terms “tamsulosin AND (stones or colic)” limited to the last 10 years (http://tinyurl.com/jj8qysm). This resulted in 167 studies, of which 3 relevant randomized controlled trials were selected.
Bottom Line:
In 2008, the Washington University emergency medicine journal club looked at the efficacy of medical expulsive therapy in the management of ureteral stones. The conclusion at that time, based largely on a systematic review and meta-analysis from the Annals of EM the year before, was that tamsulosin and nifedipine may “improve moderate sized (more than 5mm) distal kidney stone expulsion rates compared with standard medical therapy.” This review did suggest the need for further large randomized controlled trials to further evaluate this topic, given that the results were based largely on “Low-quality RCT’s.”
Since then, several larger RCTs have been performed. One study with mostly small stones (70% being 2-3 mm in diameter) found that tamsulosin did not improve time to stone expulsion or need for urgent intervention (Vincendeau 2010). This finding was supported by a subsequent trial in which ~75% of stones were < 5 mm in diameter (Pickard 2015). Despite a trend toward improved spontaneous stone passage at 4 weeks (the primary outcome) in patients with stones > 5 mm in size receiving tamsulosin, the authors of this paper haughtily conclude that “further trials involving these agents for increasing spontaneous stone passage rates will be futile.” Ignoring this advice, an additional study was recently published in Annals of EM (Furyk 2016). While this study also did not demonstrate improved stone passage when considering all patients (absolute risk reduction 5.1%; 95% CI -3.0% to 13.0%), a prespecified subset analysis of patients with stones 5-10 mm in diameter resulted in a significant improvement in this outcome (ARR 22.4%; 95% CI 3.1% to 41.6%; NNT = 4.5).
This body of data, overall, suggests that tamsulosin likely provides no benefit to patients with small kidney stones (i.e. those smaller than 5 to 6 mm in diameter), but does seem to provide benefit for larger stones. A recent meta-analysis that includes all of these studies came to a similar conclusion (Wang 2016). For patients with stones 5-10 mm in diameter, this meta-analysis found an ARR of 22% (95% confidence interval 12% to 33%) with a NNT of 5.
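As an aside for the math-averse, the NNT is just the reciprocal of the absolute risk reduction. A quick sketch in Python, using the ARR figures quoted above (the function name is mine):

```python
def nnt(arr):
    """Number needed to treat = 1 / absolute risk reduction,
    with ARR expressed as a fraction (0.224 for 22.4%)."""
    return 1.0 / arr

# Furyk 2016, stones 5-10 mm: ARR 22.4% -> NNT ~ 4.5
print(round(nnt(0.224), 1))  # 4.5
# Wang 2016 meta-analysis, stones 5-10 mm: ARR 22% -> 4.5,
# conventionally rounded up to the reported NNT of 5
print(round(nnt(0.22), 1))   # 4.5
```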
Anticoagulation of Isolated Calf DVTs
Oct 03, 2016
Journal Club Podcast #32: August 2016
A quick review of the literature on anticoagulation in isolated below the knee DVTs, and why we still don't have an answer...
Mr. M is a 53-year-old patient with a history of high blood pressure and high cholesterol who flew back to St. Louis from Shanghai five days ago. Two days after getting back he noted pain and swelling in his left calf, which he thought was due to a muscle strain while getting off of the airplane. Since then, the swelling has remained constant while the pain has worsened slightly. He reports a dull ache that is worse with ambulation and improves with rest. He denies shortness of breath or chest pain.
On physical exam his vitals are normal, his lungs are clear, and he has a normal S1 and S2 without a cardiac murmur or gallop. He has mild swelling noted to the left calf with focal calf tenderness. There is no erythema, warmth, or induration, and his pulses are symmetric bilaterally.
Concerned for a possible DVT given his recent long flight, you obtain lower extremity dopplers which reveal echogenic material in the peroneal and soleal veins consistent with acute DVT, but no signs of DVT proximal to the calf.
You call the patient's PMD to discuss management, and she recommends sending the patient home with no therapy because, "It's just a calf DVT. You don't treat those." While you understand that has been the classic teaching, you also remember listening to an ERCAST Podcast in the last couple of years that spoke about the controversy surrounding this dogma. You send Mr. M home and recommend repeat dopplers in 5-7 days, but later decide to search the literature in order to make your own evidence-based decision...
PICO Question:
Population: Adult patients with isolated calf DVT distal to the popliteal veins
Intervention: Therapeutic anticoagulation (heparin, low molecular weight heparin, factor Xa inhibitor, direct thrombin inhibitor, or vitamin K antagonist)
Comparison: No anticoagulation.
Outcome: Propagation of clot to the popliteal veins or beyond, PE, bleeding, death, cost, patient satisfaction, quality of life.
Search Strategy:
An advanced PubMed search was conducted using the strategy “calf AND (thrombosis OR DVT) AND anticoagulation” (http://tinyurl.com/hx4onng), resulting in 180 articles. From these, the four most relevant articles were chosen.
Bottom Line:
The management of DVTs isolated to the calf veins has remained a controversial topic for many years. The 2008 guidelines produced by the American College of Chest Physicians (ACCP) recommended long-term (3 month) anticoagulation for all patients with DVT, regardless of proximal extension. Updated ACCP guidelines produced this year, on the other hand, make no specific recommendation, offering options of either treating or not treating isolated calf DVTs, as long as surveillance ultrasounds are performed. This lack of a firm recommendation seems based less on the actual evidence than on the low quality of that evidence, which we will review.
This clinical conundrum has persisted for decades, with research extending back at least to the 1980s. In 1985, Lancet published a randomized, controlled trial from Sweden (Lagerstedt 1985), in which patients with DVT isolated to the calf veins (as detected by phlebography) received either warfarin or no further anticoagulation (following a 5-day course of IV heparin in both groups). With 52 patients randomized, the authors found that the risk of recurrent clot was significantly lower in the warfarin group, with a NNT of 3.5 (95% CI 2.2 to 8.5). Unfortunately, recurrent clot is not a very patient-centered outcome, and no patient in either group developed a PE.
In 2010, an article published in the Journal of Vascular Surgery (Schwarz 2010) randomized patients with isolated calf muscle vein (soleal or gastrocnemius) DVT to nadroparin or compression therapy alone. They found no difference in progression of clot into either deep calf veins (peroneal or posterior tibial) or proximal veins, with a RR of 0.98 (95% CI 0.14 to 6.7). Unfortunately, this small study was poorly reported and failed to adhere to CONSORT guidelines, making assessment of internal validity nearly impossible. Additionally, the inclusion of calf muscle veins only, which may be less likely to propagate or result in PE, would not detect benefit in patients with deep calf vein clot.
A recent pilot randomized, controlled trial conducted in the UK (Horner 2014) randomized patients with any isolated calf DVT to either warfarin or anti-inflammatory medication. Seventy patients were analyzed, and for the composite outcome of proximal propagation, symptomatic PE, VTE-related sudden death, or major bleeding, they found a statistically nonsignificant trend towards benefit with anticoagulation (absolute risk reduction 11.4%, 95% CI -1.5% to 26.7%). Although the study was small, it was designed to demonstrate the feasibility of a much larger study, which is currently underway. Such a study will provide the best evidence to date regarding this treatment.
The most recent evidence on this subject was a retrospective observational study conducted at UC Davis. Outcomes of patients with isolated calf DVT were assessed based on whether or not there was an intention for them to receive any form of anticoagulation. The composite outcome of proximal DVT or PE was less likely in those treated with anticoagulation (RR 0.36, 95% CI 0.15 to 0.84). This benefit persisted after adjustment was made for several confounders. Unfortunately, observational studies are never able to control for the unknown confounders, and such evidence does little to inform us of whether anticoagulation provides any actual benefit. Also, less than half the patients in the study underwent repeat testing to assess for propagation of DVT, and such testing was more likely to occur in the control group, biasing the results in favor of the treatment group. The study also included primarily inpatients, and less than 4% were from the ED.
The evidence reviewed here is notably lacking in large, randomized controlled trials of good methodology, making it difficult to make firm recommendations. The current Chest guidelines make sense given this lack of evidence, and the decision to treat or not treat isolated calf DVTs should be based on several factors, including location (muscle vs. deep veins), need for ongoing immobilization, and bleeding risk. Emergency physicians will have to work closely with primary care physicians and admitting teams when making such decisions, given the controversy surrounding this topic.
High Sensitivity Troponins
Aug 10, 2016
Journal Club Podcast #31: July 2016
A short monologue on the advantages and disadvantages of high sensitivity troponins and a 0/1 hour algorithm to rule-out and rule-in MI...
You are working your first ever shift as an intern in the Barnes-Jewish Emergency Department on July 1st when you encounter Mrs. P, a forty-year-old woman whose chief complaint is chest pain. She reports that 1 hour prior to arrival, while sitting at her desk at work, she developed a sharp, stabbing pain in the left side of her chest. The pain was not worse with deep inspiration or with exertion, but has continued since then. It is currently rated a 7 out of 10 and has not radiated or migrated. She notes mild subjective shortness of breath and denies diaphoresis or lightheadedness.
Mrs. P tells you she has a history of hypertension and high cholesterol and takes amlodipine and atorvastatin. She has never had any cardiac issues, but her father had his first MI in his early seventies. Her ROS is negative except as noted above and physical exam is completely normal, including no tenderness to palpation of the chest, no leg swelling, and no calf tenderness. Her ECG has already been performed and is normal, and her initial troponin (ordered in triage) is < 0.03.
After discussion with your attending, you decide she is low risk enough that you can forego a stress test, but your attending does think you should keep her for a repeat troponin to rule out MI. Since her pain began so recently, your attending suggests waiting the requisite 6 hours before doing so. As you explain this to the patient, she is baffled at why she has to wait so long. As you explain that it takes time for troponin to reach detectable levels in the bloodstream, you remember that the sensitivity of troponin has increased with newer assays, and that in Europe they use high-sensitivity troponin assays. You wonder if using such an assay would allow you to rule out MI more quickly without adding the risk of missed disease. After your shift, you head to Google and begin your search...
PICO Question:
Population: Adult patients presenting to the ED with chest pain
Intervention: One-hour serial troponin using a high sensitivity assay
Comparison: Traditional 6-hour (or 4-hour or 3-hour or 2-hour) serial cardiac enzymes
Outcome: Missed MI, major adverse cardiac event (MACE), or death
Search Strategy:
An advanced PubMed search was performed using the terms troponin AND high-sensitivity AND myocardial infarction, all limited to article title (http://tinyurl.com/zluqok8). This resulted in 106 articles, from which the following four were chosen.
Bottom Line:
Chest pain is one of the most common chief complaints among patients presenting to the emergency department (ED) in the US, representing around 5% of all ED visits. Despite widespread testing, around 2-3% of patients with myocardial ischemia or infarction are discharged home from the ED, and missed MI accounts for more malpractice dollars awarded than any other single diagnosis. It is likely that the risk of malpractice lawsuits has at least in part driven the high admission rates for chest pain seen in the US, which range from around 40% to as high as 80% at some institutions according to data from Medicare beneficiaries. This is in spite of data suggesting that only 13-23% of patients presenting to the ED will ultimately have a diagnosis of acute coronary syndrome.
The most recent AHA guidelines suggest that patients with symptoms suggestive of myocardial infarction should have serial cardiac troponin I or T levels drawn at presentation AND 3 to 6 hours after symptom onset to detect any rise in the level over that time. However, high-sensitivity troponin assays used in Europe have shown the potential to allow a more rapid rule-out/rule-in approach in which serial enzymes are checked only 1 hour apart. This approach could potentially lead to more rapid disposition of patients from our already overcrowded EDs. The European Society for Cardiology (ESC) has already embraced these newer troponin assays, incorporating a 0/1 hour algorithm into their recommended diagnostic approach. We sought, therefore, to review the evidence supporting these high-sensitivity troponin assays and their use in 0/1 hour algorithms.
The four studies identified were all very similar in design. Two of the studies assessed troponin I assays (Jaeger 2016, Neumann 2016) while two assessed troponin T assays (Mueller 2016, Reichlin 2015). All four studies used some kind of algorithm with specific cut-offs for the assay as well as cut-offs for the change in value between 0 and 1 hour. Using these cut-offs, patients were either assigned a “rule-in” status, a “rule-out” status, or were considered to be in a gray zone if they could not be placed in either of these groups. For example, the two studies evaluating high-sensitivity troponin T used the following definitions (a code sketch of this triage logic follows the list):
• Patients with initial hs-cTnT < 12 ng/L and Δ1 hour < 3 ng/L were assigned to rule out status.
• Patients with initial hs-cTnT ≥ 52 ng/L and Δ1 hour ≥ 5 ng/L were assigned to rule in status.
• All other patients were considered to be in the observation group.
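To make the logic concrete, here is a minimal Python sketch transcribing the hs-cTnT cut-offs exactly as listed above; the function name and structure are my own, not code from any of the studies:

```python
def triage_0_1_hour(hs_ctnt_0h, hs_ctnt_1h):
    """Classify a chest pain patient using the 0/1-hour hs-cTnT
    cut-offs defined above (values in ng/L at 0 and 1 hour).

    Returns 'rule-out', 'rule-in', or 'observe' (the gray zone).
    """
    delta = abs(hs_ctnt_1h - hs_ctnt_0h)  # 1-hour change
    if hs_ctnt_0h < 12 and delta < 3:
        return "rule-out"
    if hs_ctnt_0h >= 52 and delta >= 5:
        return "rule-in"
    return "observe"  # neither set of criteria met

print(triage_0_1_hour(8, 9))    # rule-out
print(triage_0_1_hour(60, 70))  # rule-in
print(triage_0_1_hour(30, 32))  # observe
```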
Use of this algorithm (and similar algorithms in the articles evaluating high-sensitivity troponin I) resulted in high negative predictive values for ruling out patients in all 4 studies, ranging from 98.9% to 100%, and allowed anywhere from 39% to 63.4% of patients to be ruled out within one hour. The positive predictive values for ruling patients in were significantly lower, ranging from 70.4% to 87.1%.
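The gap between those high NPVs and the more modest PPVs is what you would expect from Bayes' theorem at the MI prevalence typical of ED chest pain cohorts. A quick illustration in Python (the sensitivity, specificity, and prevalence below are illustrative values of my choosing, not figures reported by the four studies):

```python
def predictive_values(sens, spec, prev):
    """PPV and NPV from sensitivity, specificity, and prevalence."""
    tp = sens * prev            # true positives
    fp = (1 - spec) * (1 - prev)  # false positives
    tn = spec * (1 - prev)      # true negatives
    fn = (1 - sens) * prev      # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Illustrative only: a highly sensitive rule applied where ~15% of
# patients actually have MI yields a high NPV but a modest PPV
ppv, npv = predictive_values(0.98, 0.90, 0.15)
print(round(ppv, 2), round(npv, 3))  # ~0.63 and ~0.996
```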
While this evidence suggests that use of high-sensitivity troponin assays would allow the rapid rule-out of many patients presenting to the ED with chest pain, there are some valid concerns remaining. These include the relatively low positive predictive values of the algorithms (which may lead to unnecessary procedures such as cardiac catheterization), and the lack of a clear plan for patients in the observation or gray zones (which may again lead to inappropriate testing in these patients). Additionally, the articles themselves are prone to significant bias, in part because of the heavy use of industry funding in all four studies (Ioannidis 2016). In all of the studies, for example, the adjudicated final diagnosis was based primarily on the results of the serial high-sensitivity cardiac troponin assays themselves, raising the specter of incorporation bias.
Most importantly, the current evidence is limited to providing negative and positive predictive values for the various algorithms. Future studies will need to assess the clinical impact of using the algorithms on decision making, disposition times, and patient outcomes in order to demonstrate efficacy and safety. It is likely the lack of such studies that has kept these algorithms out of our practice in the US, and out of the AHA guidelines.
Ketamine Analgesia for Acute Pain in the Emergency Department
May 19, 2016
Journal Club Podcast #30: April 2016
A brief discussion of the use of ketamine for pain control in the ED, weighing both the evidence and the anecdote...
It's two o'clock in the afternoon during a typical weekend TCC shift, when you get a page that a triage patient is coming to room 4. You look at the chart and see that it's a 45-year-old man who was riding his horse and got bucked off, landing on his left arm in an awkward position. The triage nurse has noted a deformity to the upper arm with intact pulses and sensation distally. You notice that the triage doctor has ordered IV lidocaine for the man, which you find most confusing.
Your evaluation of the patient confirms much of the triage note. He has swelling and bruising to the left upper arm, concerning for a mid-shaft humerus fracture. He is neurovascularly intact and has stable vital signs. He denies any past medical history and has no allergies.
You decide to ask your attending about the IV lidocaine, which the patient says hasn't really helped his pain, but Dr. Cohn just shakes his head and sighs. You then ask about giving the patient low-dose ketamine for his pain, having recently listened to a podcast on the subject (SGEM#130: Low Dose Ketamine for Acute Pain Control in the Emergency Department). Dr. Cohn laughs in your face and turns his back on you. Later, he comes back and explains that while he admittedly hasn't read the literature on the subject, he isn't a big fan of replacing one addictive substance with another, especially one that's less familiar to him.
Curious, and wanting to be prepared to look smart in front of your pompous attending, you decide to search the literature and see what the evidence actually shows...
PICO Question:
Population: Adult patients in the ED with acute pain
Intervention: IV ketamine
Comparison: Usual care with IV opiate analgesia
Outcome: Pain control, adverse events (hypoxia, respiratory depression, dysphoria, etc.), patient satisfaction, ED length of stay
Search Strategy:
PubMed was searched using the terms “ketamine AND analgesia AND emergency”, resulting in 169 articles (http://tinyurl.com/znjvzc4). The titles of these were searched, and the four most relevant articles chosen.
Bottom Line:
The literature on this topic is still somewhat limited, primarily comprising small studies that compare morphine to ketamine alone, or to combination therapy with morphine and ketamine. One of the earliest of these studies (Galinski 2007) enrolled patients with “severe acute pain” following trauma, being transported by mobile intensive care units staffed by physicians. Patients received either morphine and ketamine, or morphine alone. They demonstrated similar reductions in pain scores at 30 minutes with higher morphine consumption (0.2 mg/kg vs. 0.15 mg/kg) among those not receiving ketamine. Notably, the incidence of neuropsychological adverse effects was much higher among those receiving ketamine (36% vs. 3%).
A study out of the Rhode Island Hospital ED (Beaudoin 2014) compared the use of morphine alone to the use of morphine with two different doses of ketamine (0.15 mg/kg and 0.3 mg/kg) among patients with moderate to severe acute pain. Twenty patients were enrolled in each group, and pain reduction was better among those receiving ketamine, with no difference in pain reduction between the two ketamine groups. Oddly, the amount of rescue analgesia provided did not differ among the three groups, suggesting that the patients receiving only morphine may have been inadequately treated; under-treatment, rather than a true benefit of ketamine, may account for the differences observed in the primary outcome. Length of stay was over half an hour longer in the two ketamine groups than it was among those receiving morphine only.
A third study, conducted in Isfahan, Iran (Majidinejad 2014), enrolled 126 patients with long bone fractures, randomizing them to receive either morphine or ketamine (here at a rather high dose of 0.5 mg/kg). They looked at pain control just 10 minutes after drug injection, and found similar rates of pain control, with a successful decrease in pain severity seen in nearly all patients in both groups at 10 minutes (93.7% of patients in the ketamine group and 96.8% in the morphine group). Almost 10% of patients receiving ketamine developed emergence phenomena, while no adverse effects were seen in those receiving morphine. This study did a poor job reporting data according to CONSORT guidelines, making interpretation of the results difficult.
The final study, out of Maimonides Medical Center in Brooklyn, NY (Motov 2015), enrolled 90 patients with acute abdominal, flank, back, or musculoskeletal pain, and randomized them to receive either morphine or ketamine (0.3 mg/kg). They found no difference in the primary outcome, reduction in pain at 30 minutes, but did demonstrate a significantly higher need for rescue analgesia with IV fentanyl at 120 minutes.
While these studies certainly confirm the analgesic effects of IV ketamine in patients with acute pain, they do not necessarily demonstrate a benefit over typical IV opiate administration. Instead, there seems to be a higher incidence of adverse effects (primarily dysphoria), increased ED length of stay, and a potentially greater need for rescue analgesia, likely owing to the relatively short duration of action of ketamine. Unfortunately, patients for whom we would typically consider ketamine as an alternative to opiates due to concerns regarding respiratory or cardiac depressive effects (i.e. the elderly, those with pulmonary compromise, or those with unstable vital signs) were understandably excluded from all of these studies. While it seems biologically plausible that ketamine would provide adequate analgesia in such patients with a reduction in the risk of major adverse events, there is currently no data in the literature to directly support this (external validity).
Age-Adjusted D-Dimer to Rule Out Pulmonary Embolism
Apr 07, 2016
Journal Club Podcast #29: February 2016
A monologue on the ins and outs of using an age-adjusted D-dimer to rule out pulmonary embolism in patients over 50...
You are working in a community ED one afternoon when you encounter Mrs. X, a pleasant 65-year-old woman with a history of hypertension and osteoporosis, who is in town visiting her grandchildren from California. She flew in 2 days earlier, and for the last 12 hours has noted some right-sided, pleuritic chest pain. She thinks she pulled a muscle picking up her 3-year-old grandson, but was worried and wanted to be evaluated.
Her physical exam is unremarkable, including a heart rate of 70, a normal oxygen saturation on room air, and a normal respiratory rate. Her lungs are clear and she does seem to have pain with deep inspiration. There is no chest wall tenderness, no LE swelling, no LE cords, and no calf tenderness.
Given her recent plane trip you are concerned about a possible PE. Having attended a journal club on reducing PE protocol CT ordering rates during residency, you calculate her Well's score and find that she is low risk, so you order a D-dimer in addition to an ECG and chest x-ray. The ECG and CXR are normal, but the D-dimer is elevated at 600 µg/L FEU. You explain to the patient that she will need to have a CT scan to evaluate for a PE, at the same time remembering that you've heard about age-adjusted D-dimer. You wonder if you could adjust the upper limit of normal for this patient, given her age, and hence obviate the need for the CT.
Too busy to look this up now, you continue your shift. The CT comes back negative and you discharge the patient on NSAIDs, but after your shift, you decide to look more closely at the evidence.
PICO Question:
Population: Patients aged > 50 years with a clinical suspicion of PE and non-high pre-test probability.
Intervention: An age-adjusted D-dimer cutoff.
Comparison: A conventional D-dimer cutoff.
Outcome: Sensitivity, specificity, likelihood ratio, false negative rate, and decrease in additional confirmatory testing (CT, VQ scan).
Search Strategy:
PubMed was searched using the terms “age adjusted d-dimer” (http://tinyurl.com/jfa4t62). This resulted in 148 articles, from which the following 4 were selected.
Bottom Line:
In patients who are not high-risk for pulmonary embolism, D-dimer has been shown to be effective at ruling out disease. Unfortunately, this test also has a low specificity and a high false-positive risk. This risk increases (and specificity decreases) substantially with increasing age: specificity falls incrementally with each decade of age, from around 67% in patients < 50 years of age to around 15% in those over 80 (Schouten 2013). As a result, some have proposed adjusting the D-dimer cutoff based on age. A derivation study using prospective cohorts of 1721 patients from Switzerland and France determined that Age (years) × 10 µg/L was the optimal cutoff for patients over 50 (Douma 2010). This formula assumes the use of an assay that reports results in fibrinogen equivalent units (FEU) with a standard cutoff of 500 µg/L. This same study attempted to validate this formula in two similar European patient cohorts, resulting in 11.2% (95% CI 9.3-13.3%) and 18.2% (95% CI 15-21.4%) increases in the number of patients with a negative D-dimer in each of the cohorts, with very small increases in the rates of missed PE (0.4% and 0.2%, respectively). The major limitation of this study was that all of the cohorts were comprised of fairly homogenous Western European patients.
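In code, the adjustment is a one-liner. A minimal sketch, assuming an FEU-based assay with a standard cutoff of 500 µg/L as in Douma et al (the function name is mine):

```python
def d_dimer_cutoff(age_years, standard=500.0):
    """Age-adjusted D-dimer cutoff in µg/L FEU: age x 10 for
    patients over 50, otherwise the standard 500 µg/L cutoff."""
    return age_years * 10.0 if age_years > 50 else standard

# Mrs. X from the vignette: age 65 with a D-dimer of 600 µg/L FEU.
# Conventional cutoff: 500 -> positive. Age-adjusted: 650 -> negative.
print(d_dimer_cutoff(65))  # 650.0
```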
A retrospective study conducted using medical records from Kaiser Permanente Southern California (Sharp 2016) demonstrated an increase in the specificity of D-dimer in patients over 50 years of age from 54.4% (95% CI 53.9-55.0) using the standard cutoff to 63.9% (95% CI 63.4-64.5) using this age-adjusted cutoff. While this was accompanied by a decrease in sensitivity from 98.0% (95% CI 96.4-84.2) to 92.9% (95% CI 90.3-95.0), the overall negative likelihood ratio of the age-adjusted cutoff was 0.11 (95% CI 0.08-0.15) suggesting a significant ability to rule out disease in non-high risk patients. A systematic review and meta-analysis demonstrated an even better ability to rule out PE, with a negative likelihood ratio of 0.05 in patients over 50 years of age (Schouten 2013). The primary limitation of this meta-analysis was the significant risk of verification bias in all but one of the included studies.
Having thus been derived and validated in several different populations, the age-adjusted cutoff has also been evaluated in clinical practice (Righini 2014). In a study conducted at 19 hospitals in 4 countries in Europe (Belgium, France, Switzerland, and the Netherlands), the use of an age-adjusted cutoff resulted in an absolute decrease in the number of patients requiring additional testing of 11.6% (95% CI 10.5-12.9). Out of 331 patients with a D-dimer between the conventional cutoff and the age-adjusted cutoff, only one was later found to have a venous thromboembolism, for an overall failure rate of 0.3% (95% CI 0.1-1.7).
The use of an age-adjusted cutoff has therefore been shown to significantly reduce the need for further confirmatory testing in non-high-risk patients, thereby reducing the risks associated with CT scanning (e.g. contrast-induced nephropathy [Mitchell 2010]). Unfortunately, this comes with a small, but not insignificant, increase in the false negative risk, increasing the likelihood of missing a PE. Given the medicolegal atmosphere in this country, it will likely take some work to change clinical practice on a broad scale. The current clinical practice guideline from the American College of Emergency Physicians (ACEP) makes only brief mention of adjusting the D-dimer cutoff for age, and does not make any recommendations regarding this practice. In order to change practice, there will likely need to be some support from our professional societies in this regard, as well as support from hospitals and physicians on other services (i.e. pulmonology, critical care, internal medicine, and hospitalists).
Finally, there are multiple D-dimer assays currently available, using a wide range of cutoff values to define “normal” and “elevated” levels. Barnes-Jewish Hospital currently uses the HemosIL D-dimer with an abnormal cutoff of 230 µg/L, which makes it difficult to adjust for age using the standard formula. Standardization of these assays, or the derivation of an age-adjusted cutoff for each assay, will therefore be necessary before we can adjust our concepts of normal and abnormal at institutions that do not use an assay with a cutoff of 500 µg/L.
Sterile vs. Non-sterile Gloves for Laceration Repair
Mar 09, 2016
Journal Club Podcast #28: January 2016
Chris Carpenter and Mike Willman get together to talk about sterile gloves, knowledge translation, and the slippery slope of evidence-based medicine...
Articles:
1st Years: Sterile Versus Nonsterile Gloves for Repair of Uncomplicated Lacerations in the Emergency Department: A Randomized Controlled Trial, Ann Emerg Med 2004; 43: 362-370. (http://pmid.us/14985664)
2nd Years: A pilot study on the repair of contaminated traumatic wounds in the emergency department using sterile versus non-sterile gloves, Hong Kong J Emerg Med 2014; 21: 148-152. (http://tinyurl.com/HongKongJEMGloves2014)
3rd Years: Comparing non-sterile with sterile gloves for minor surgery: a prospective randomised controlled non-inferiority trial, Med J Aust 2015; 202: 27-31. (http://pmid.us/25588441)
4th Years: Comparison of the Prevalence of Surgical Site Infection with Use of Sterile Versus Nonsterile Gloves for Resection and Reconstruction During Mohs Surgery, Dermatol Surg 2014; 40: 234-239. (http://pmid.us/24446695)
Vignette:
Mr. C. is a healthy 45 year old male who has been constructing a covered porch in his backyard (imagine it is springtime). While working on it earlier today, he tripped over a 2x4 and stumbled through a pane of glass. After getting up, he noticed that he had cut his right thigh. The brisk bleeding subsided after his wife held pressure with a towel for 15 minutes (he was unable to do so himself as he nearly fainted at the sight of his own blood). He did not hit his head or suffer any other injuries. Upon seeing the gaping wound, several centimeters long with subcutaneous fat poking through, she promptly brought him to the ED for stitches.
In the ED, his vitals are BP 126/85, P 68, RR 18, T 37.1°C, and 100% oxygen saturation on room air. He is in no acute distress, and his exam is unremarkable, except for a 6cm laceration to his right lateral thigh. It is less than 1cm deep with only subcutaneous fat showing. X-ray shows no fractures or foreign bodies. There does not appear to be any muscle or tendon injury and he has full range of motion. The bleeding has stopped and sensation is intact throughout. There are no visible vessels, nerves, or foreign bodies on exploration.
As you irrigate the wound with tap water (why tap water rather than normal saline? See July 2008 Journal Club and updated studies by Weiss 2013 and the Cochrane Collaboration), you carefully contemplate how you are going to repair it. You decide to use a 3-0 or 4-0 non-absorbable suture due to its location (but absorbable would arguably do as well, see February 2010 Journal Club). As you gather your supplies, your attending reminds you to use sterile gloves and sterile technique while doing the repair. You remember that while doing other laceration repairs on previous shifts you have been told by some attendings that you needed to use sterile gloves, but others have said that using clean, non-sterile gloves worked equally well at less cost. You decide to learn what the literature has to say about sterile gloves for ED laceration repair.
PICO Question:
Population: Patients with acute, uncomplicated lacerations requiring closure in the ED
Intervention: Use of clean, non-sterile gloves during laceration repair
Comparison: Use of sterile gloves for laceration repair
Outcome: Rate of wound infection after repair
Search Strategy:
You search PubMed for “sterile nonsterile gloves laceration repair” and get two results, one of which is in German. You broaden your search to “sterile AND (nonsterile OR non-sterile) AND gloves” and get 96 results (see http://tinyurl.com/zzt97jy). Only one of these studies pertains directly to ED laceration repairs, but at least it is a randomized, controlled trial so Brian Cohn will be ecstatic about that. You also conduct a Web of Science search to find any additional non-PubMed archived studies that have referenced the single ED RCT on this topic and you find one additional study in the Hong Kong Journal of Emergency Medicine. Lacking more ED studies, you decide to incorporate additional studies that compare non-sterile and sterile gloves for “minor surgery”, since laceration repair is a sort of “minor surgery” and the results could reasonably be extrapolated to the ED setting. After reviewing the 96 abstracts, you select the following four articles to review in detail:
Bottom Line:
Suture repair of traumatic lacerations occurs daily in EDs around the world. The objectives of laceration repair include restoration of skin integrity with adequate post-healing cosmetic appearance, minimization of functional impairment, and avoidance of infection. Risk factors for laceration infection include diabetes, increased age, laceration width, laceration location (infection risk lower on head/neck than elsewhere), and presence of a foreign body (Hollander 2001). Many “traditional” practices continue through dogma rather than on the basis of scientific validity. Historically, suture closure was the most common method used to repair lacerations in the ED (Baker 1990), but dogma is being replaced by evidence as summarized on Skeptics Guide to Emergency Medicine (in 2012 and again in 2014), SOCMOB Blog, Life in the Fast Lane, and Manu et Corde. In addition, past Washington University EM Journal Clubs have explored the evidence-basis to support absorbable sutures and tap water irrigation. The current Journal Club challenges the long-held belief (dogma) that sterile gloves are required when repairing lacerations in the ED to prevent post-repair wound infections.
Research indicates that >10⁵ organisms per mL are required to cause wound infection (Elek 1956, Robson 1973, Raahave 1986) in traumatic lacerations or post-op wounds. Although clean, non-sterile gloves carry a significantly higher bacterial load, that increased load is not sufficient to cause infections (Creamer 2012). However, open boxes of gloves are more likely to carry bacterial contaminants, particularly when the gloves are wet (Luckey 2006, Hughes 2013). Sterile gloves at Barnes-Jewish Hospital cost approximately $2.30 per pair versus $0.70 per pair of non-sterile gloves, and in the busy ED, frequent interruptions while suturing often translate into more than one pair of gloves being used. There is therefore an opportunity to reduce healthcare costs by using non-sterile gloves during laceration repair. However, one 2003 Emergency Medicine Journal Best Bets synopsis, attempting to redefine a standard of care for ED wound repair using non-sterile gloves, implied that insufficient evidence existed to ethically alter practice (Sage 2003).
Despite that 2003 Best Bets review, several decades of research imply that using clean, dry, non-sterile gloves (or no gloves) in acute care settings does not increase post-repair infection rates. In fact, in 1982 Bodiwala et al. observed no increase in infection rates in a series of 418 laceration repairs comparing surgical gloves (207 wounds) with bare-hand (no glove) repair (210 wounds). Similarly, another study in the mid-1980s, using bare hands for 25 patients and surgical gloves for 25 patients in rural primary care, found no increase in the incidence of complications (Worrall 1987). In this month’s Journal Club we critically appraised two ED-based studies, one primary care study, and a surgery study, each of which evaluated the risk of post-wound repair infection using non-sterile, clean gloves.
Perelman 2004
This three-hospital, Toronto ED-based, randomized controlled study is both the most pertinent and the highest quality research to date on the topic of ED wound closure. The authors excluded 26% of patients: those with diabetes, renal failure, asplenia, immunodeficiency (acquired, congenital, or due to immunosuppressive therapy), cirrhosis, keloids, or current antibiotic use; those whose treating physician felt prophylactic antibiotics were required (prosthetic heart valve, bites, contaminated wounds); those with high-risk wounds (multiple trauma, open fracture, concomitant vascular, nerve or tendon injury, stab wounds, gunshot wounds, intra-articular wounds, animal and human bites, presentations > 12 hours after injury, signs of infection at presentation, or suspected foreign body); and those who did not consent. They used explicit, previously defined criteria to define “wound infection” (Maitra 1986, Rutherford 1980, Gosnold 1977), cultured purulent infections at follow-up, and noted no increased risk of infection using non-sterile gloves. Critical concerns that might limit confidence in the results included lack of reporting of physician-level characteristics (experience closing wounds), patient-level characteristics (health literacy, socioeconomic status), or system-level characteristics (access to primary care for follow-up wound care).
Ghafouri 2014
This Iranian ED study was deemed too biased, with too incomplete a statistical analysis, to meaningfully inform the equivalency or non-inferiority of non-sterile gloves for traumatic, contaminated wound closure. Among the many flaws identified were:
· No blinding, and thus significant potential for bias (skewed estimates of “truth” in observed outcomes) on the part of patients (ascertainment bias via better follow-up), physicians (co-intervention bias), or outcome assessors.
· High loss to follow-up rates without any sensitivity analysis to explore the potential effect on results if those lost to follow-up developed wound infections.
· No regression analysis to determine whether unequal distribution of confounders (age, limb, wound, sharp objects) accounted for differences in wound infection rate rather than sterile vs. non-sterile gloves.
· No standardization of either wound repair (Irrigation? Topical antibiotic?) or how to reproducibly define if “wound infection” occurred.
· Uncertain external validity to U.S. EDs (staffing of EDs, availability of outpatient follow-up).
· No assessment of patient compliance with oral antibiotics.
· No sample size calculation to quantify the risks of Type I and Type II error.
Heal 2015
This Australian primary care study evaluated skin biopsies with suture repair, noting no association between non-sterile glove use and skin infection rates at the time of suture removal. These results should be extrapolated to ED traumatic lacerations cautiously, since trauma-related wounds requiring suture repair are often contaminated, irregular, and closed several hours after skin opening.
Mehta 2014
This dermatologic surgery study had limited applicability to ED laceration repair: a single surgeon performing thousands of Mohs micrographic surgery procedures detected no significant change in surgical site infection (SSI) rates when using non-sterile gloves (NSG). This has limited application to the average ED, where many physicians repair traumatic lacerations that are frequently irregular, deep, and contaminated. In addition, multiple flaws in this study included:
· No explicit definition of “wound infection” (for examples see Maitra 1986, Rutherford 1980, or Gosnold 1977) which could affect both the accuracy and the reliability of whether wound infection was present or absent.
· Limited external validity to other surgeons (only 1 surgeon performed every procedure) and to emergency medicine (different patients, different environment, and different injuries than dermatology patients), so this is essentially an “N of 1” study for that surgeon.
General concerns expressed by Journal Club attendees included
1) Fear of a “slippery slope effect” in ED procedures, beginning with appropriate non-sterile glove use for immunocompetent, low-risk patients and then extrapolating to non-sterile glove use on higher-risk patients and/or inappropriate (non-evidence-based) use with lumbar punctures and central lines. The consensus opinion was that this is the role for clinical educator oversight, bedside procedural teaching, and trainee (and attending) understanding of the published research.
2) Uncertain role for Shared Decision Making between clinicians and patients regarding the use of non-sterile gloves for wound repair in appropriate patients. There was no consensus on this debate, but hope that the 2016 AEM Consensus Conference would better highlight appropriate scenarios for ED Shared Decision Making. However, there was consensus that if a clinician decides to use non-sterile gloves and a patient or family member asks why non-sterile gloves are being used that the provider should be able to explain the rationale and safety for this decision based on the available evidence.
3) Several attendees debated the true cost-savings of this penny-pinching discussion, since much more expensive decisions are made daily in every ED, ward, and ICU. However, most attendees felt that, given the volume of laceration repairs occurring in most EDs, the frequent task interruptions necessitating glove changes, and an era of transparent cost-containment, using non-sterile gloves on immunocompetent patients with low-risk wounds is worth considering.
Despite these concerns, the vast majority of attendees felt comfortable using non-sterile gloves for low-risk laceration repairs. In fact, most were already using non-sterile gloves based on their awareness of this topic from the social media world.
Balanced Fluid Resuscitation
Jan 27, 2016
Journal Club Podcast #27: November 2015
Evan Shwarz, Louis Jamtgaard and I sit down and chat about fluid selection for large-volume resuscitation...
It’s another busy day in TCC when an elderly female rolls in from triage with fever, cough, and a new oxygen requirement. Her vitals are T 38.3, BP 90/42, HR 115, RR 24, SpO2 88% on RA. Even before you see the patient you are concerned for pneumonia with severe sepsis. You institute early antibiotics, fluids, and serial lactates, and systematically begin to aggressively resuscitate her. The patient requires nearly five liters of normal saline before her blood pressure stabilizes. Proud of your resuscitation, you tweet out #crushingsepsis and #normalsaline4life, which gets an immediate response from Dr. Evan Schwarz, who happened to be trolling your twitter feed. He tweets “More like #increasedrenalfailure and #trybalancedfluids”. Inspired by his tweets, you perform a brief literature review on the topic of ‘balanced fluid’ resuscitation.
PICO Question:
Population: Critically ill adult patients (e.g. patients with severe sepsis or septic shock)
Intervention: Balanced fluid resuscitation (e.g. lactated Ringer's solution)
Comparison: Chloride rich fluids (e.g. normal saline)
Outcome: Mortality, renal failure, need for renal replacement therapy, cost, length of stay
Search Strategy:
No formal search strategy was used. Articles were chosen by Drs. Louis Jamtgaard and Evan Schwarz, in part based on research done for a recent article by Dr. Schwarz in EPMonthly.
Bottom Line:
Normal saline has long been the “go to” fluid of choice for resuscitation in the ED for critically ill patients. However, the use of such “chloride rich” or “unbalanced” fluids has been controversial for decades, with many calling for the use of fluids that more closely resemble the tonicity of human blood. For example, aggressive resuscitation with isotonic saline has been shown to decrease serum pH, without affecting serum osmolality (Williams 1999), and has been suggested to increase the risk of renal dysfunction (Lobo 2014). The clinical significance of these and similar effects has been called into question over the last decade. We sought to evaluate the evidence for and against the use of balanced fluid resuscitation in ED patients, particularly those with severe sepsis or septic shock.
Unfortunately, no articles addressing fluid choice in emergency department patients were identified. All four studies were conducted in the ICU, making it difficult to generalize the results to our patient population. This is especially true in two of the papers: in one of these only 22% of patients were admitted from the ED, half were post-operative, and nearly a third were admitted following elective surgery (Yunos 2012); in the other, patients were noted to be primarily post-operative, and the majority of these were following elective surgery (Young 2015). Neither of these studies demonstrated a reduction in acute kidney injury (AKI), need for renal replacement therapy, or mortality with the use of balanced fluids. While both studies were prospective, and the study by Yunos et al was randomized, the patients were overall less sick than patients traditionally admitted to the ICU from the ED, making it difficult to apply these results to our patient population (external validity).
Two less methodologically robust, retrospective studies were identified that at least included patients more similar to those in our setting (Raghunathan 2014, Raghunathan 2015). These two studies were conducted using the same retrospective cohort of patients from 360 US ICUs with sepsis requiring vasopressor therapy. Unfortunately, as these were retrospective studies, the two treatment groups were unbalanced, and statistical methods had to be employed to balance the two cohorts. In the first of these studies, patients receiving any amount of balanced fluid were propensity matched to patients receiving only unbalanced fluids during the same time period. Patients who received some balanced fluids saw a decrease in in-hospital mortality (RR 0.86, 95% CI 0.78-0.94; NNT 31) with no difference in AKI or need for dialysis. A dose-response relationship was also observed, in which the relative risk of in-hospital mortality was lowered an additional 3.4% on average for every 10% increase in the proportion of balanced fluids received. In the second study, multiple methods were used to adjust for baseline differences between groups, including propensity matching, inverse probability weighting, and logistic regression. Regardless of the method used, the use of balanced fluids was associated with a decrease in in-hospital mortality.
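For those unfamiliar with propensity matching, the idea is to model each patient's probability of receiving the treatment from baseline covariates, then pair treated and control patients with similar modeled probabilities. Below is a toy Python sketch of greedy 1:1 nearest-neighbor matching; it illustrates the general technique only, not the authors' actual implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, treated):
    """Greedy 1:1 nearest-neighbor matching on the propensity score.

    X: (n, k) array of baseline covariates.
    treated: boolean numpy array of treatment assignment.
    Assumes at least as many controls as treated patients.
    Returns (treated_indices, matched_control_indices).
    """
    # Model P(treatment | covariates), i.e. the propensity score
    scores = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated)[0]
    controls = list(np.where(~treated)[0])
    matches = []
    for i in t_idx:
        # Pair each treated patient with the closest unused control
        j = min(controls, key=lambda c: abs(scores[c] - scores[i]))
        matches.append(j)
        controls.remove(j)  # match without replacement
    return t_idx, np.array(matches)

# Tiny synthetic example
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))       # baseline covariates
treated = rng.random(100) < 0.4     # boolean treatment indicator
t_idx, c_idx = propensity_match(X, treated)
```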
Unfortunately, there are no prospective, randomized trials in ED patients assessing the efficacy of balanced fluids in resuscitation. While there are theoretical benefits, none of these has yet been proven. The current data consist of a fairly methodologically sound randomized trial and a before-and-after study, each conducted in a relatively healthy cohort of patients quite disparate from critically ill patients in the ED, and two retrospective observational studies of septic ICU patients in which patients were treated at clinician discretion. While these two latter studies utilized statistical methods to attempt to balance known confounders between the groups, such methods do not replace randomization. The current evidence is fair at best, with varying outcomes, making it difficult to provide a clear recommendation. Further prospective studies should be conducted using critically ill ED patients, in order to help us make better decisions regarding our fluid administration options, particularly in those patients undergoing large-volume resuscitation. Given the relatively few downsides to balanced fluid administration, it seems reasonable to opt for lactated Ringer's or Hartmann's solution when administering a large volume of fluid to a patient, particularly when such a patient may already be acidotic.
D-Dimer in the Diagnosis of Aortic Dissection
Nov 30, 2015
Journal Club Podcast #26: October 2015
In which I sit down and discuss the use of D-Dimer to rule-out acute aortic dissection in low-risk patients...
You are moonlighting in a local community ED one afternoon—making serious bank and contemplating what to spend that money on—when you encounter a forty-year-old male patient with chest pain. He has a history of hypertension for which he takes lisinopril, but is otherwise completely healthy. He was watching TV earlier in the day, eating nachos, when he developed a fairly abrupt onset of dull pain in the center of his chest.
As you continue your HPI, you learn that the pain does, in fact, radiate into his back, but is not ripping or tearing. He has no neurologic deficits, has symmetric blood pressures in both arms, and has stable vitals (BP 130/85, HR 79, RR 12, SpO2 98%).
You order an ECG, CXR, cardiac enzymes, a BMP, and a CBC, all of which come back normal.
You are still considering an acute aortic dissection as part of your differential, given the radiation of pain to the back, but would prefer not to expose your patient to the risks of radiation and contrast unless absolutely necessary. Your suspicion is fairly low, and you wonder if a D-dimer would be a reasonable test to rule out aortic dissection in this patient. In the end, you get a CT scan, which is negative for dissection, and admit the patient for a stress test. When you get off work, you decide to search the evidence and see what you can find.
PICO Question:
Population: Adult ED patients with suspected aortic dissection
Search Strategy:
MEDLINE was searched via PubMed, using the strategy “D-dimer AND aortic dissection”, resulting in 117 citations (http://tinyurl.com/omohgdl). Of these, one meta-analysis and 3 primary research articles were selected for inclusion.
Bottom Line:
Aortic dissection, while a relatively rare disease, carries with it a high mortality rate. Mortality for type A dissection is around 25% when managed with surgery, and this rate increases to nearly 60% when managed non-operatively (Hagan 2000). Mortality has been shown to increase substantially for every hour treatment is delayed (Mészáros 2000), making prompt diagnosis and management a key to a good outcome. Diagnosis of aortic dissection typically requires CT angiography, MRI, or trans-esophageal echocardiography (TEE). MRI and TEE are both time-consuming modalities, and TEE is typically quite invasive. While CT is relatively quick and noninvasive, it carries risks associated with both radiation exposure and with the administration of iodinated contrast dye. Therefore the possibility of a noninvasive, rapid screening laboratory test to rule-out aortic dissection is quite appealing.
D-dimer has been proven to accurately rule-out pulmonary embolism, but only when used properly. Low and possibly moderate-risk patients, as determined by a clinical decision rule such as the modified Well’s score or the Geneva score, with a negative D-dimer are felt to be at sufficiently low risk of PE that further testing is not needed (ACEP Clinical Policy).
D-dimer has also been proposed as a means of ruling out aortic dissection. A prior journal club on this topic found that D-dimer had a low negative likelihood ratio (0.06), and hence a high negative predictive value, but concluded that there was an absence of clinical decision rules (CDRs) to determine pre-test probability, and hence to determine which patients could be accurately ruled out for dissection with a negative D-dimer alone. We therefore sought to determine whether such CDRs existed, and whether an algorithm could be devised to use D-dimer to exclude aortic dissection in a subset of patients.
The aortic dissection detection (ADD) risk score, introduced in the 2010 American Heart Association guidelines, is a prospectively validated CDR in which low-risk patients (ADD score = 0) have been shown to have a risk of disease of around 6%. Combined with a negative likelihood ratio of 0.05, as demonstrated in a recently published meta-analysis from the Annals of Emergency Medicine, a patient with an ADD score of 0 and a negative D-dimer would have a post-test probability of disease of 0.3%. Of note, a large, prospective, multicenter trial found a similar negative likelihood ratio (0.07). A retrospective study out of Germany sought to evaluate this approach, and found that out of 376 patients being evaluated for aortic dissection, 127 (34%) had an ADD score of 0 and a negative D-dimer. None of these patients were found to have an aortic dissection. In a similar study from Italy, a prospectively collected registry was used to retrospectively evaluate the ADD score combined with D-dimer testing. Out of 1035 patients being evaluated for aortic dissection with a D-dimer level available, 92 (8.9%) had both an ADD score of 0 and a negative D-dimer, and none of these patients were found to have an aortic dissection. Among 152 patients with an ADD score of 1 and a negative D-dimer, only 2 (0.2% of the overall cohort) were found to have an aortic dissection.
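That 0.3% figure follows directly from Bayes' theorem applied to odds. A minimal sketch, using the ~6% pre-test probability for an ADD score of 0 and the meta-analysis negative likelihood ratio of 0.05 quoted above (the function name is mine):

```python
def post_test_probability(pre_test_p, lr):
    """Update a pre-test probability with a likelihood ratio by
    converting to odds, multiplying, and converting back."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# ADD score of 0 (~6% pre-test probability) plus a negative D-dimer (LR- 0.05)
print(round(post_test_probability(0.06, 0.05), 4))  # ~0.0032, i.e. ~0.3%
```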
While these data are promising, suggesting that patients with an ADD score of 0 (and possibly 1) and a negative D-dimer are at extremely low risk of having an aortic dissection, some issues remain. To date, no prospective studies have evaluated the accuracy or safety of this approach, and such studies should be conducted prior to more widespread implementation. Additionally, the test threshold—the probability of disease below which further testing is likely to cause more harm than benefit, but above which further testing is warranted—should be calculated in order to determine the cut-off at which we can safely exclude disease with a negative D-dimer alone. The Pauker and Kassirer method for calculating test and treatment thresholds is shown in figure 1; it relies on knowing the diagnostic test characteristics of the more definitive test, the risks associated with that test, and the risks and benefits associated with treatment of the disease. Once this test threshold is known, and once the combined use of the ADD score and a negative D-dimer has been prospectively shown to identify a subset of patients at sufficiently low risk of aortic dissection, an algorithm can be put in place to reliably exclude the disease without the risks of more definitive testing.
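For the curious, below is one common formulation of the Pauker-Kassirer thresholds as a Python sketch; the variable names and the illustrative numbers are mine, and the formulas should be checked against the original paper before being relied upon:

```python
def pauker_kassirer_thresholds(sens, spec, test_risk, rx_benefit, rx_harm):
    """One common formulation of the Pauker-Kassirer test and
    test-treatment thresholds (returned as probabilities of disease).

    sens/spec: characteristics of the definitive test
    test_risk: net harm of the test itself
    rx_benefit: net benefit of treating the diseased
    rx_harm: net harm of treating the non-diseased
    (all harms/benefits in the same utility units)
    """
    test_threshold = ((1 - spec) * rx_harm + test_risk) / \
                     ((1 - spec) * rx_harm + sens * rx_benefit)
    test_treat_threshold = (spec * rx_harm - test_risk) / \
                           (spec * rx_harm + (1 - sens) * rx_benefit)
    return test_threshold, test_treat_threshold

# Illustrative numbers only: a near-perfect confirmatory test with a
# small intrinsic risk yields a low test threshold and a high
# test-treatment threshold (test across most of the probability range)
print(pauker_kassirer_thresholds(0.98, 0.98, 0.01, 1.0, 0.5))
```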
Intranasal Midazolam for Pediatric Status Epilepticus
Oct 22, 2015
Journal Club Podcast #25: September 2015
Drs. Katie Leonard and Indi Trehan join me from Children's Hospital to talk intranasal drugs and pediatric seizures...
A 3-year-old male is brought to the ED by his parents for a seizure. The patient has had 2 prior febrile seizures. His parents note that he has been seizing for approximately 10 minutes. The triage nurse notes that the patient is actively seizing, with generalized tonic movements of all extremities, eyes rolled back, teeth clenched, and unresponsive with peri-oral cyanosis. The patient is brought immediately back to a room.
His initial vital signs are: HR 144, BP 110/67, RR 14, SpO2 87% on RA. You provide 100% O2 by non-rebreather and optimize airway positioning. You order 0.1 mg/kg of IV lorazepam per the SLCH status epilepticus management guidelines.
Two nurses quickly attempt to place an IV and fail to obtain access after 3 attempts. You send someone to get the IO kit and ask for Diastat (rectal diazepam). The nurse asks you how much Diastat you want to give, and while you are trying to remember the age-based dosing recommendations, you wonder whether there are other options for delivering a benzodiazepine without IV access. You remember that we sometimes give intranasal versed (midazolam) for sedation and wonder if it would work for seizures as an acceptable alternative to Diastat.
You know that the effectiveness of anticonvulsants to terminate seizures decreases rapidly as the time between start of convulsions and drug administration lengthens, so you want to hasten your delivery of an anticonvulsant that works!
PICO Question:
Population: Pediatric patients in status epilepticus
Intervention: Intranasal midazolam
Comparison: Rectal diazepam
Outcome: Resolution of seizure activity, return to normal mental status, ED length of stay
Search Strategy:
MEDLINE was searched via PubMed using the terms “(Seizures OR status epilepticus) AND intranasal AND rectal” (tinyurl.com/puvvqx2). This resulted in 43 citations, from which the four most relevant articles were selected.
Bottom Line:
Rectal diazepam is commonly used in the management of pediatric seizures at home and by EMS personnel. Given issues with administration and dosing, some EMS systems have moved to alternative routes of benzodiazepine administration, including intranasal administration of midazolam (Holsti 2007).
A handful of studies have directly compared rectal diazepam to intranasal midazolam for the treatment of pediatric seizures. One hospital-based study from Israel demonstrated a decrease in the time from hospital arrival to treatment and faster seizure control with intranasal midazolam compared to rectal diazepam in children with febrile seizures (Lahat 2000), with no adverse events in either group. In a similar study from India involving both febrile and afebrile seizure patients, there was a mean decrease of around 18 seconds in the time to prepare and administer intranasal midazolam compared with rectal diazepam, and a mean decrease of more than 60 seconds in the time between drug administration and cessation of seizure activity (Bhattacharyya 2006); seizure cessation was more likely to have occurred within 10 minutes of drug administration in the intranasal midazolam group (RR 1.1, 95% CI 1.0 to 1.2). Finally, a Turkish study demonstrated a similar increase in the rate of seizure cessation within 10 minutes of drug administration with intranasal midazolam (RR 1.5, 95% CI 1.0-2.2), along with a decrease in overall seizure duration and a decrease in the need for additional drug administration (Fisgin 2002).
These three foreign studies were all limited by small sample sizes (n = 52, 188, and 55, respectively), poor randomization technique, lack of blinding, lack of a sample size calculation, and failure to identify a primary outcome. In addition, several patient characteristics (such as the large proportion of patients with a history of birth asphyxia in Bhattacharyya et al) make it difficult to generalize the results to children in the US, limiting external validity.
A single US study was identified from Salt Lake City that compared intranasal midazolam with rectal diazepam in the prehospital setting (Holsti 2007). Patients enrolled after a change in EMS protocol that mandated seizure treatment with intranasal midazolam were compared to historical controls. Patients receiving intranasal midazolam had a decrease in the duration of EMS-witnessed seizure activity (median 11 minutes vs. 30 minutes, p = 0.003) and in total seizure time (median 25 minutes vs. 45 minutes, p < 0.001). Intranasal midazolam also led to lower overall hospital charges, decreased need for bag-valve-mask ventilation, decreased need for intubation, and decreased need for hospital admission. This study was also limited by a small sample size (n = 57), as well as by a before-and-after study design, lack of randomization and blinding, and missing outcome data from the EMS records.
Overall, the evidence in support of the use of intranasal midazolam over rectal diazepam is limited and of poor quality, composed primarily of small, unblinded studies with poor methodology conducted in dissimilar populations. However, the evidence at least supports the premise that intranasal midazolam is safe and effective. It is reasonable, in light of this evidence, to use intranasal midazolam as an alternative to rectal diazepam in children with seizure activity, and it may eventually replace rectal diazepam for home treatment given its ease of administration.
Proton Pump Inhibitors in Upper GI Bleeding
Sep 13, 2015
Journal Club Podcast #24: August 2015
Ramblings and musings about the benefits of proton pump inhibitors in the management of acute upper GI bleeds...
You're working a shift in EM-2 one day when you pick up a patient with the chief complaint of "bloody emesis." The patient is a 45 year old male with a history of chronic low back pain who takes daily naproxen (250 mg BID). His pain worsened two months ago when he was in an MVC, and he has been taking ibuprofen in addition to the naproxen. He began noticing epigastric abdominal discomfort 3 weeks ago, and last night began vomiting up coffee-ground emesis. This morning he had one episode of bright red emesis and decided to come to the ED.
He is hemodynamically stable, appears to be in no distress, and is not actively vomiting. He has mild epigastric abdominal tenderness, clear lungs, normal heart sounds without tachycardia, and his conjunctiva and skin do not appear to be pale. His hemoglobin is 13 and his PT/INR and PTT are normal. An NG tube is placed and returns coffee grounds with streaks of blood that do not clear after lavage with one liter of normal saline. You order a type and screen and cross him for 4 units (just in case).
You call the GI fellow, thinking that the patient probably needs an upper endoscopy. The GI fellow asks that you start the patient on a continuous infusion of intravenous pantoprazole. You wonder how useful IV proton pump inhibitors actually are in the setting of upper GI bleeds, and whether a continuous infusion (which ties up an IV and is rather expensive) is really necessary. The patient is admitted to the ICU with plans for an EGD later in the day. After you leave your shift you decide to look and see if there is any evidence out there on the use of PPIs in the management of acute upper GI hemorrhage.
PICO Question:
Population: Adult patients with undifferentiated acute upper GI bleeding
Intervention: Proton pump inhibitor (either IV or PO)
Comparison: Placebo
Outcome: Mortality, rebleeding rates, need for surgery, need for blood transfusion, ICU length of stay, hospital length of stay
Search Strategy:
A search of the Cochrane database using the term “proton pump inhibitor” resulted in a relevant meta-analysis from 2010. A search of PubMed using the strategy ("proton pump inhibitor" OR omeprazole OR pantoprazole OR esomeprazole OR lansoprazole) AND (bleed OR bleeding OR hemorrhage), limited to clinical trials in the last 5 years, yielded no additional relevant articles (http://tinyurl.com/q32oyaf). Three articles included in the Cochrane review were therefore chosen.
Bottom Line:
The debate over the utility of proton pump inhibitors (PPIs) in acute upper GI bleeding stems largely from a Cochrane systematic review and meta-analysis published on the subject in 2006, and later revised in 2010. In this systematic review, the authors found no effect on mortality (OR 1.12, 95% CI 0.75 to 1.68), rates of rebleeding (OR 0.81, 95% CI 0.62 to 1.06), the need for surgery (OR 0.90, 95% CI 0.65 to 1.25), or the need for a transfusion (OR 0.95, 95% CI 0.78 to 1.16). Despite this lack of a reduction in any patient-important outcomes, they did find a significant reduction in the stigmata of recent hemorrhage at endoscopy (OR 0.67, 95% CI 0.54 to 0.84). The main argument, therefore, stems from the acceptance of certain surrogate outcomes in place of patient-important outcomes.
In fact, the articles referenced in the meta-analysis tended to find similar results individually. Daneshmend et al (1992) found no difference in mortality, rebleeding, or need for transfusion, but noted a decrease in signs of bleeding at endoscopy. Hawkey et al (2001) found no reduction in death or rebleeding, but found a significant reduction in the presence of blood in the stomach at endoscopy. Finally, Lau et al (2007) found no difference in death or need for surgery, but did find a reduction in the need for treatment at the time of endoscopy (though all patients underwent endoscopy).
Surrogate outcomes have long been used in clinical research, offering several advantages such as lower cost, smaller sample size, and shorter follow-up. However, caution must be used when employing such outcomes. Validation is required before an improvement in a surrogate outcome can be taken to reflect an improvement in a patient-centered outcome. Complex statistical criteria have been developed for such validation, such as those proposed by Prentice in 1987, but even these criteria have been called into question (Berger 2004). Equally important is the issue of determining the effect size for a patient-important outcome by using a surrogate outcome, even when the two have been shown to correlate. It has been shown, for example, that the use of surrogate outcomes tends to result in an overestimation of effect size when compared to studies using patient-important outcomes.
By demonstrating an improvement in surrogate outcomes without improvements in patient-important outcomes, the authors of these studies and of the Cochrane review all draw similar conclusions: that the data are limited but suggest there may be a benefit to using a proton pump inhibitor for upper GI bleeds. The American College of Gastroenterology draws a similar conclusion in its guideline: "Pre-endoscopic intravenous proton pump inhibitor (PPI)…may be considered to decrease the proportion of patients who have higher risk stigmata of hemorrhage at endoscopy and who receive endoscopic therapy. However, PPIs do not improve clinical outcomes such as further bleeding, surgery, or death (Conditional recommendation, high-quality evidence)." This recommendation seems reasonable, and given the few downsides to proton pump inhibitors, their administration is likely worthwhile, as long as it does not interfere with other, potentially more important, interventions (e.g. resuscitation of critically ill patients, reversal of anticoagulation, or blood transfusion in symptomatic anemic patients).
Endovascular Therapies in the Management of Acute Stroke
Jun 26, 2015
Journal Club Podcast #23: May 2015
Dr. Peter Panagos, director of neurovascular emergencies at Washington University, joins me to talk about Mr. Clean, Mr. Rescue, ESCAPE, and whole lot of other great acronyms...
A 67 year old white female presents to a moderate sized community hospital (also a Primary Stroke Center) 90 minutes after the onset of right arm and leg hemiplegia as well as aphasia. The initial NIH Stroke Scale (NIHSS) score is 13. A non-contrast head CT is negative, though there is a possible hyperdense "MCA sign" per the preliminary read. All labs are unremarkable and no contraindications to IV thrombolysis are elicited. The local emergency physician and neurologist agree that the patient is a good candidate for IV tPA. The risks and benefits are discussed, but the family has some excellent questions. As the expert at the regional Comprehensive Stroke Center, you are contacted for your opinion. The patient's son happens to be a family practitioner and has heard about some new interventional therapies that have been studied, including several studies recently published in the New England Journal of Medicine.
The family poses the following questions:
1. Are the new endovascular studies now the new standard of care for acute stroke care?
2. Does the patient really need to get IV tPA locally? Can she just be transported to the regional stroke center for endovascular therapy?
3. How do you know which patients will benefit from endovascular therapy? Can you predict who will do well?
4. Can every stroke center perform this therapy? Why does she need to go to a larger center?
5. How much better are these new therapies compared with standard IV therapy?
PICO Question:
Population: Adult patients with acute ischemic stroke
Intervention: Intra-arterial clot retrieval or intra-arterial thrombolysis
Comparison: Standard of care
Outcome: Functional status, mortality, quality of life
Search Strategy:
No formal search was conducted. Three recent positive articles and one negative article from 2013 were selected by Dr. Peter Panagos, an expert in neurologic emergencies.
Bottom Line:
In 2013, the stroke world was turned on its ear by the publication of three articles - all in the same issue of the New England Journal of Medicine - that showed no benefit to endovascular therapy in the treatment of acute stroke (IMS-III, SYNTHESIS, and MR RESCUE). While some felt this to be the final nail in the coffin for endovascular treatment, others felt that issues with inclusion criteria and treatment strategies may have negated any potential benefit of treatment.
In the SYNTHESIS trial, patients were randomized to either endovascular treatment or intravenous (IV) t-PA, rather than being eligible to receive both therapies. In both SYNTHESIS and IMS-III, patients were selected based solely on clinical data, and no radiologic assessment was made to verify large vessel occlusion. Finally, MR RESCUE excluded patients who were eligible for IV t-PA and also extended eligibility to 8 hours from symptom onset.
As a result of these proposed limitations (lack of proper confirmation of large vessel occlusion using either CT angiogram or MR angiogram, exclusion of patients eligible for IV t-PA, and inclusion of patients with prolonged symptoms), further research was planned and has been conducted to attempt to identify an appropriate subset of patients for endovascular therapy.
Three positive studies were published earlier this year. All three trials included only patients with imaging-confirmed occlusion of a large artery in the anterior cerebral circulation.
The first of these was MR CLEAN, which was conducted in the Netherlands and included patients who could receive endovascular treatment within 6 hours of symptom onset. The result was a favorable shift in the distribution of the modified Rankin scale in favor of endovascular treatment, with an adjusted odds ratio of 1.67 (95% CI 1.21 to 2.30).
The second article, ESCAPE, placed an emphasis on “fast treatment times and efficient work-flow.” This trial was stopped early for a perceived benefit, demonstrating a common odds ratio of 2.6 (95% CI 1.7-3.8) for an improvement of 1 point on the modified Rankin scale, favoring the intervention.
The third trial, SWIFT PRIME, was also stopped early for perceived benefit. Patients were eligible if endovascular therapy could be initiated within 6 hours of the time they were last seen normal. This study demonstrated a shift towards more favorable outcomes on the modified Rankin scale associated with endovascular treatment (p < 0.001). The calculated number needed to treat to have one patient have a "less-disabled outcome" was 2.6.
Two additional studies have been published in the last year: REVASCAT and EXTEND-IA. These studies also required imaging-confirmed large proximal vessel occlusion in the anterior circulation for enrollment. Both studies were also stopped early for perceived benefit, and both demonstrated improved functional outcomes at 90 days.
Finally, two additional studies have been completed, but have not yet been published. The THRACE trial had positive results, while the results of the THERAPY trial remained consistent with clinical equipoise. Both of these trials were halted early based on the results of MR CLEAN.
Despite the rousing success of these more recent trials, there is still a great deal of controversy surrounding endovascular treatment. Some argue that the prior negative results simply cannot be ignored, pointing out that previous negative studies raise the bar for statistical success in subsequent trials. Others have argued that the use of an ordinal analysis of the modified Rankin scale in many of these subsequent studies raises the possibility of erroneously detecting a benefit when one does not truly exist, given the limited reliability of the scale. Finally, two of these more recent studies were stopped early for benefit at unplanned interim analyses, raising the possibility that a benefit was detected that would not have been found had the studies been completed.
On the other hand, proponents of endovascular therapy point out that the more recent studies used stricter inclusion criteria and protocols - including more timely intervention - than the original studies. It should also be pointed out that the largest of these trials (MR CLEAN) was not stopped early. Finally, ordinal analysis has become a common statistical tool in stroke trials, and has been accepted as a means of identifying smaller (but clinically significant) differences in outcomes than can be detected using a dichotomous cutoff for the modified Rankin scale.
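To make the ordinal-analysis debate concrete, the sketch below contrasts the two approaches on hypothetical modified Rankin scale data (not from any of these trials). The trials themselves used proportional-odds ("shift") models; a rank-based test is used here as a simple stand-in that similarly uses the entire distribution rather than a single cutoff:

```python
from scipy.stats import fisher_exact, mannwhitneyu

# Hypothetical 90-day modified Rankin scale scores (0 = no symptoms, 6 = death)
control   = [1, 2, 3, 3, 4, 4, 4, 5, 5, 6]
treatment = [0, 1, 1, 2, 2, 3, 3, 4, 5, 6]

# Dichotomous analysis: "good outcome" defined as mRS 0-2, everything else lumped together
good_t = sum(s <= 2 for s in treatment)
good_c = sum(s <= 2 for s in control)
table = [[good_t, len(treatment) - good_t], [good_c, len(control) - good_c]]
print("dichotomized p =", fisher_exact(table)[1])

# Ordinal ("shift") analysis proxy: a rank-based test over the full 0-6 scale,
# which can register a shift from, say, mRS 4 to mRS 3 that dichotomization ignores
print("ordinal p =", mannwhitneyu(treatment, control, alternative="two-sided")[1])
```

The point of the ordinal approach is exactly the one in the paragraph above: improvements anywhere along the scale count, which buys statistical efficiency at the cost of leaning on the scale's inter-rater reliability across all seven categories.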
Both sides in the debate over endovascular therapy for stroke make salient points. The current tide, however, seems to be in favor of its proponents. Many institutions have begun to perform endovascular procedures, with strict protocols guiding eligibility, including imaging confirmation of large vessel occlusion. This practice will have far-reaching implications, including an impact on EMS transport and inter-hospital transfer to stroke centers. The data will need to be carefully considered, not only with respect to the treatment of individual patients, but also with respect to the broader impact on our healthcare system.
Routine Laboratory Screen in Psychiatric Patients in the ED
May 08, 2015
Journal Club Podcast #22: April 2015
I contemplate the necessity of routine laboratory testing for psychiatric patients presenting to the emergency department...
You are working a shift in EM-1 (i.e. the corner of pain) one typical Friday the 13th. The moon is full and so are rooms 5-12. Three new psychiatric patients are brought in by EMS. The first is a 23 year-old female with a known history of schizophrenia, medication noncompliance, and chronic cocaine abuse who was found wandering the street acting paranoid and delusional. Her vital signs are normal. The second is a sixty year-old male whose wife passed away one year ago and who has been feeling sad and depressed ever since. He states he wants to kill himself and owns a gun. He denies alcohol or drug use and does not appear intoxicated. His vitals are completely normal. The third is a thirty-five year-old male with no known psychiatric history. His wife called EMS because he has been behaving more bizarrely for the last week, claiming the FBI was out to get him, claiming their children were aliens, and claiming she was the anti-Christ. He had barricaded himself in his room and the police had to assist in his extraction. He is tachycardic, but otherwise has normal vitals.
You know that you will need a psychiatric consult on all of these patients and start to order the "typical" labs, when you stop and wonder if this is really the best practice. You have three very different patients with very different differentials, and you wonder if the cookie-cutter method is really the best. You ask yourself if there is any literature to guide your decision making, specifically if there is any evidence that routine lab testing is beneficial in psychiatric ED patients. The next day you decide to perform a literature search and find...
PICO Question:
Population: Pediatric and adult patients presenting to the ED for evaluation of psychiatric chief complaints
Intervention: Routine laboratory screening
Comparison: Laboratory testing based on clinician discretion
Outcome: Disposition, psychiatric care, change in medical management, cost, length of stay
Search Strategy:
A PubMed search was conducted using the terms ((emergency) AND ((psychiatry) OR psychiatric)) AND ((routine) OR screening) in the title or abstract, limited to humans (http://tinyurl.com/kyo95e5). This resulted in 252 articles, from which 4 were chosen for inclusion.
Bottom Line:
Patients presenting to the emergency department (ED) with psychiatric complaints typically undergo a process of medical clearance prior to evaluation by a psychiatrist or psychiatric intake nurse. The nature of this "medical clearance" is seldom standardized. In addition to a history and physical exam, laboratory testing is often performed. What additional laboratory testing should be performed, if any, is a highly controversial topic and often varies from state to state and hospital to hospital. A retrospective analysis of adult psychiatric patients presenting to EDs in Rhode Island demonstrated significant variability in the number of tests performed at each hospital. The Illinois Department of Human Services specifically requires all patients to have blood counts, electrolytes, a pregnancy test, and a drug screen performed for medical clearance, while the state of New Jersey specifies that "diagnostic testing should be conducted based upon the emergency provider's determination of need."
The American College of Emergency Physicians Clinical Policy on the management of psychiatric patients in the ED recommends that laboratory testing be "directed by the history and physical examination," noting that routine testing is "very low yield." The results of our journal club support these recommendations for both adult and pediatric patients. A retrospective study of patients admitted to the Medical College of Georgia demonstrated only one significant laboratory abnormality that would have changed ED intervention or disposition out of 519 cases reviewed. This abnormality involved a patient with abnormal vital signs and a significant past medical history, and should have been expected based on the history and physical exam. A similar study in Bakersfield, CA demonstrated that only 4 patients without significant findings on the history and physical exam had a laboratory abnormality requiring a change in management. In all 4 cases, the lab abnormality was a positive urinalysis suggesting infection, and disposition was not altered in any of the cases.
In a randomized controlled trial conducted at San Francisco General Hospital, patients referred to the emergency psychiatric service were randomized to either mandatory drug screening or screening at physician discretion. Mandatory drug screening was found to have no impact on disposition or referral for substance abuse treatment. This study suggests that clinician discretion, rather than protocol, should dictate whether a urine drug screen is necessary for patients being evaluated by psychiatry in the ED.
Similar results to those discussed for adult patients have also been demonstrated in the pediatric population. In a retrospective chart review of pediatric patients presenting to a large inner-city ED in California, lab testing resulted in a change in disposition in less than 1% of cases. In all but one of these cases, there was an associated abnormality in the history or physical exam that predicted the lab abnormality. In the remaining case, the abnormality was a positive pregnancy test, resulting in admission to the medical service, where no pregnancy-related interventions were required. In another 6% of cases, there was a lab abnormality that required a change in management. In half of these cases, there was an associated abnormality in the history or physical exam; in the other half, the abnormality involved a non-urgent change in medical management. The ED length of stay was significantly shorter in patients who did not undergo blood testing and in those who required no screening labs at all.
These data suggest that in alert, cooperative patients - both adult and pediatric - routine laboratory testing for psychiatric complaints in the ED is unnecessary. Testing should instead be directed by the history and physical exam. Urine drug testing, while often useful in the long-term management of patients, does not appear to alter the acute disposition of patients, and should not delay psychiatric consultation or disposition from the ED.
Albumin for Patients with Spontaneous Bacterial Peritonitis or Large Volume Paracentesis
Apr 17, 2015
Journal Club Podcast #21: March 2015
A few of the residents join me to discuss the benefits of albumin administration for patients with SBP or those undergoing large volume paracentesis...
You are caring for a fifty-year old gentleman with a history of non-alcoholic steatohepatitis with cirrhosis who presents to the emergency department (ED) with increased abdominal distension, shortness of breath, and fevers. His abdomen is distended and tense with mild diffuse tenderness. His temp is 38.7 C, BP is 103/60, HR is 89, and SpO2 is 99% on room air. You check his labs and find a WBC of 13.4, baseline anemia, a creatinine of 1.2, and an INR of 1.4. His chest x-ray reveals small lung volumes without infiltrates or pulmonary edema. A bedside ultrasound reveals a large amount of ascites. You decide to perform a therapeutic paracentesis to relieve pressure on his diaphragm and improve his respiratory complaints, and to send fluid to the lab for cell count, differential, gram stain, and culture given your concern for spontaneous bacterial peritonitis (SBP).
You manage to drain just over 8 liters of fluid, for which the patient is quite thankful. The fluid gram stain reveals abundant polymorphonuclear cells (PMNs) with no organism seen. The cell count reveals 15,000 nucleated cells, of which 89% are PMNs. You feel confident that the patient has SBP and order a dose of cefotaxime. You are admitting the patient to the medicine service when the resident suggests that you administer albumin. She tells you that the American Association for the Study of Liver Diseases (AASLD) recommends albumin administration both following large-volume paracentesis and in the management of select patients with SBP. You follow the resident’s recommendations, but are curious what the evidence actually shows. After your shift you decide to begin your literature search.
PICO Question:
#1
Population: Adult patients with cirrhosis and tense ascites undergoing large volume paracentesis
Intervention: Intravenous albumin administration
Comparison: No albumin
Outcome: Mortality, renal impairment, circulatory dysfunction, hyponatremia
#2
Population: Adult patients with cirrhosis and spontaneous bacterial peritonitis
Intervention: Intravenous albumin in addition to antibiotics
Comparison: Antibiotics alone
Outcome: Mortality, renal impairment
Search Strategy:
Two MEDLINE searches were conducted via PubMed. A search was performed using the terms albumin AND "spontaneous bacterial peritonitis" which resulted in 230 citations (http://tinyurl.com/l2zy2se). Of these, one systematic review and one large randomized controlled trial were chosen. An additional search was conducted using the terms albumin and paracentesis (http://tinyurl.com/kfau542). This resulted in 414 citations, from which two systematic reviews were chosen.
Bottom Line:
Ascites is one of the many complications associated with hepatic cirrhosis, and is associated with a poor prognosis (D’Amico 2006). Ascitic fluid can accumulate to the extent that it impairs functional status, and current guidelines recommend a large volume paracentesis for patients with tense ascites. When such large volumes of ascitic fluid are removed, fluid shifts and a decreased systemic vascular resistance can potentially lead to circulatory dysfunction, hyponatremia, and renal impairment (Lindsay 2014). The administration of intravenous albumin can theoretically reduce the risk of these complications, though this practice remains controversial (Manzocchi 2012, Caraceni 2013).
Two systematic reviews and meta-analyses published on the use of albumin following large volume paracentesis found similar results (Bernardi 2012, Kwok 2013). The use of albumin was shown to significantly reduce the risk of circulatory dysfunction, with a number needed to treat (NNT) of 2, and the risk of hyponatremia, with an NNT of 8. For these outcomes, albumin was shown to outperform other volume expanders as well. Albumin was not, however, shown to reduce mortality, renal impairment, ascites recurrence, or hospital readmission. While this evidence suggests some benefit to albumin administration, the two outcomes for which albumin demonstrated an improvement are of unclear clinical relevance. As a result, it is difficult to make a strong recommendation either for or against albumin administration in patients undergoing large volume paracentesis. The current recommendation from the American Association for the Study of Liver Disease (AASLD) is to consider the administration of albumin (6-8 g/L of fluid removed) for patients undergoing removal of greater than 5 liters. This recommendation is appropriately given a low grade (IIa/C).
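As a reminder of where such numbers come from, the NNT is simply the reciprocal of the absolute risk reduction (ARR): an NNT of 2 implies a 50% ARR for circulatory dysfunction, and an NNT of 8 implies a 12.5% ARR for hyponatremia. A one-liner makes the relationship explicit (the event rates below are illustrative placeholders, not values from the meta-analyses):

```python
# NNT = 1 / absolute risk reduction (ARR)
def nnt(control_event_rate, treatment_event_rate):
    return 1 / (control_event_rate - treatment_event_rate)

# Illustrative only: any pair of rates with an ARR of 0.5 reproduces the NNT of 2 above
print(nnt(0.8, 0.3))  # 2.0
```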
With regards to albumin administration in patients with SBP, the evidence is more compelling. A meta-analysis of 4 randomized controlled trials comprising 288 patients found significant reductions in both the risk of renal impairment (OR 0.21, 95% CI 0.11-0.42) and mortality (OR 0.34, 95% CI 0.19-0.60), with NNTs of 4 and 5, respectively. The included studies were, admittedly, of only moderate quality, with only one of them being blinded. The largest of these trials (Sort 1999), which included nearly half of the patients in the meta-analysis, independently demonstrated significant reductions in renal impairment (OR 0.21) and both in-hospital and 90-day mortality (ORs of 0.26 and 0.41, respectively). This study was not blinded, and more importantly was limited by a difference in baseline bilirubin levels between the two groups: the mean bilirubin in the control group was 6 ± 1 compared to 4 ± 1 in the group that received albumin. This difference is important, as the study demonstrated on multivariate logistic regression that elevated bilirubin levels were independently predictive of a higher risk of both renal impairment and death.
Despite this limitation, the AASLD recommendation is to administer albumin (1.5 g/kg within 6 hours of diagnosis of SBP followed by 1.0 g/kg on day 3) in patients with a serum creatinine > 1 mg/dL OR BUN > 30 mg/dL OR bilirubin > 4 mg/dL. It should be noted that the limited administration based on these laboratory abnormalities is based on a single observational cohort study, and seems somewhat arbitrary.
Both therapeutic and diagnostic paracentesis are common procedures in emergency medicine, and the diagnosis and initial management of SBP fall well within our practice parameters. Given the increased boarding times observed in many EDs, it is prudent that the emergency physician be aware of treatment modalities that require initiation within the first several hours of patient care. As a result, it seems reasonable to begin the administration of albumin to patients with SBP concomitantly with antibiotics while the patient is still in the ED, as this has been shown to decrease the risk of both renal impairment and mortality. It is also reasonable to consider albumin infusion in patients undergoing large volume paracentesis (more than 5 liters of ascitic fluid removed), though the evidence in support of this is much less compelling.
The Diagnostic Evaluation of Renal Colic
Mar 28, 2015
Journal Club Podcast #20: February 2015
Dr. Chandra Aubin joins me to talk about the using ultrasound, CT, and sometimes plain old common sense to diagnose those kidney stones...
You are a fourth-year resident working in TCC one overnight when you (not the patient) begin experiencing pain in your right flank that radiates into your right lower abdomen and groin. The pain is colicky and intermittent and increases in severity to the point that you are in tears. You go and tell your attending, Dr. Wagner, who responds "at least I didn't make you cry this time." He tells you to get back to work and stop complaining, but after another thirty minutes the pain is so intense that you can no longer perform your duties as a physician. The back-up person is called in and you are placed in a treatment room to be seen as a patient.
The intern enters the room and notes that you are writhing in pain and are unable to sit still. Your vitals are stable. You have mild right CVA tenderness, no abdominal tenderness, and a normal genital exam (for someone of your gender). Your lungs are clear and other than tachycardia your cardiac exam is unremarkable.
The intern clearly believes that you are suffering from renal colic, wishes to practice their "shared decision-making" skills with you, and begins a discussion of the diagnostic options. The options include the following:
1. Check your labs, including a urinalysis (UA), and practice expectant management;
2. The intern can personally perform a bedside ultrasound (US) of your kidneys to evaluate for hydronephrosis;
3. The ED attending can perform a bedside US;
4. You can wait for a formal US to be performed in the morning;
5. A computed tomography (CT) scan can be ordered.
You consider your options while awaiting the IV dilaudid you requested, weighing the risks of radiation exposure, the delay in obtaining a formal US, and the potential loss of accuracy with a bedside ultrasound. You wish you could get on the computer to perform a literature search to find the best option, but choose instead to black out from the pain, welcoming the darkness that enfolds you…
PICO Question:
Population: Adult patients with presumed ureteral colic
Intervention: Ultrasound (bedside or radiology-performed) as the initial imaging modality
Comparison: Computed tomography
Outcome: Diagnostic accuracy, missed alternative diagnoses, need for urologic intervention
Search Strategy:
Having seen a published study in the New England Journal of Medicine that your institution participated in, you choose to include this article (Smith-Bindman 2014). Having such a broad topic, you ask one of your ultrasound experts for additional articles. She sends you a bevy of relevant articles, from which you select an additional three titles to include.
Bottom Line:
Ureteral colic remains a common diagnosis in emergency departments in the United States, where it accounts for over 2 million annual visits. Computed tomography (CT) as a diagnostic imaging modality has seen a dramatic increase in use for the evaluation of ureteral colic, despite growing concerns regarding exposure to ionizing radiation. Ultrasound, on the other hand, has seen very little change in its use in the US, being used in ~6.9% of cases of suspected ureteral colic in 2008. This is in stark contrast to care in Canada, where ultrasound is the preferred imaging modality, being utilized in ~70% of patients who undergo imaging.
One observational study conducted in Ontario, Canada evaluated the utility of US in renal colic (Edmonds 2010). There were 817 ED-ordered renal ultrasounds for the evaluation of kidney stones. A normal ultrasound in these patients was associated with a decrease in the need to undergo a CT scan, and correctly identified a population at very low risk of requiring a urologic intervention. Among patients with a normal ultrasound, 14% underwent CT scanning within the next 90 days, and only 0.6% required a urologic procedure. Among those with either a visualized stone or indirect evidence of a stone, around 30% and 18% underwent CT scanning, respectively, and around 6% in each group required a urologic procedure.
When ultrasound is used as an initial diagnostic imaging modality, it results in a significant decrease in cumulative radiation exposure without an increase in serious adverse events. In one multicenter US study (Smith-Bindman 2014), patients were randomized to an initial imaging modality of either CT scan, bedside ultrasonography performed by an emergency physician, or ultrasonography performed by a radiologist. The mean cumulative radiation dose over the following six months was lower in the bedside and radiologist-performed ultrasound groups than in the CT group (10.1 mSv and 9.3 mSv, respectively, vs. 17.2 mSv). The rates of adverse events in the three groups were 12.4%, 10.8%, and 11.2%, respectively. The median ED length of stay was found to be lower in the bedside ultrasound group compared to the other two groups. These results suggest that the use of ultrasound as an initial imaging modality is both safe and results in decreased radiation exposure.
Bedside ultrasound use by emergency physicians has continued to increase, and proficiency in its use has become a requirement for graduating emergency medicine residents. However, proficiency with ultrasound varies greatly with level of training and experience. The effect of level of training on the diagnostic accuracy of ultrasound in suspected renal colic has been evaluated in a prospective observational study (Herbst 2014). For the detection of hydronephrosis, using CT as the gold standard, the diagnostic accuracy of bedside ultrasound was fair in the hands of fellowship-trained emergency physicians (LR+ 4.97) but poor in the hands of non-fellowship-trained attendings, experienced residents, and inexperienced clinicians (LR+ 2.78, 2.39, and 2.07, respectively). For the detection of moderate hydronephrosis, the positive LRs were much better (22.52, 8.0, 4.03, and 4.15, respectively). One could argue that the detection of moderate hydronephrosis is much more likely to alter patient management than the detection of any hydronephrosis.
The diagnostic evaluation of ureteral colic has three main goals: 1) confirmation of the presence of an obstructing ureteral stone; 2) evaluation for the presence and degree of hydronephrosis; and 3) exclusion of other potentially serious alternative causes of patients' symptoms. Many would argue that for the majority of patients, this last goal is the main priority when considering a diagnostic imaging modality. The reported diagnostic accuracy of ultrasound varies widely in the literature and is typically better for larger stones (Patlas 2001, Fowler 2002, Ripolles 2004). It could be argued, however, that the goal of ultrasound is not to detect the stone itself, but rather to evaluate the degree of hydronephrosis, and hence determine the need for intervention. It therefore seems reasonable to utilize ultrasound in patients in whom the probability of a serious alternative diagnosis has been deemed low, reserving CT for those in whom the concern is higher.
The STONE score is a retrospectively derived and prospectively validated clinical decision rule for evaluating the probability of a patient having renal colic. Based on five factors (sex, duration of symptoms, race, presence of nausea and/or vomiting, and hematuria) a score of 0-13 is assigned. Patients with a score of 0-5 were found to have an 8-9% prevalence of kidney stones, compared to a prevalence of 88-89% in those with a score of 10-13. Patients in the high prevalence category had a low rate of alternative diagnoses found on CT scan (1.6%). It would be reasonable, therefore, to forego CT scanning, and potentially ultrasound as well, in patients with a high stone score, while considering CT for those with a low score. Patients with a moderate score (prevalence of ~50%) could undergo ultrasound as an initial imaging modality.
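For illustration, a sketch of the STONE score calculation is below. The five factors and the 0-13 range come from the paragraph above; the individual point values are taken from my recollection of the derivation study (Moore 2014) and should be verified against the original before any clinical use:

```python
# STONE score sketch. Point values are assumptions drawn from the derivation
# study (Moore et al 2014) -- verify at the source before relying on them.

def stone_score(male, hours_of_pain, non_black, nausea, vomiting, hematuria):
    score = 0
    score += 2 if male else 0                          # Sex
    if hours_of_pain < 6:                              # Timing (duration of pain)
        score += 3
    elif hours_of_pain <= 24:
        score += 1
    score += 3 if non_black else 0                     # Origin (race)
    score += 2 if vomiting else (1 if nausea else 0)   # Nausea/vomiting
    score += 3 if hematuria else 0                     # Erythrocytes (hematuria)
    return score  # 0-5 low, 6-9 moderate, 10-13 high probability of stone
```

Mapped onto the strategy above: a high score argues for ultrasound (or no imaging), a low score for CT to chase alternative diagnoses, and a moderate score for ultrasound first.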
Epinephrine in Out-of-Hospital Cardiac Arrest
Feb 04, 2015
Journal Club Podcast #19: January 2015
This month I sat down with EMS physician Dr. Bridgette Svancarek to talk all about epinephrine, and whether it really does any good...
You are doing an EMS ride-along during your EMS elective and get a call for a 70-year old male in cardiac arrest. The paramedic hits the lights and sirens and you're on scene in five minutes. The fire department has already arrived and CPR is in progress. They tell you that the patient was watching TV with his wife when he collapsed about 15 minutes prior to their arrival. He did not receive any bystander CPR and was pulseless and apneic on their arrival.
You and the EMS team take over CPR and bag the patient while hooking up the monitor. He is found to be in asystole and the paramedic grabs an amp of epinephrine. You place a supraglottic airway, he gets the epinephrine, and you load him up while continuing good, uninterrupted chest compressions. He gets two more rounds of epi en route and gets a pulse back.
On arrival to the ED he has a pulse and is mildly hypotensive, but he has no spontaneous breaths and his pupils are fixed and dilated. You know that giving epinephrine in cardiac arrest is the standard of care, but wonder what effect it really has: does it improve ROSC, and if so, does it actually improve neurologic function down the road? You wonder if there is really any evidence to support its use at all. You head to the computer and start searching...
PICO Question:
Population: Adult patients with atraumatic out-of-hospital cardiac arrest
Intervention: Epinephrine administration during cardiac arrest resuscitation
Comparison: Standard CPR without epinephrine administration
Outcome: Return of spontaneous circulation, survival to hospital admission, survival to hospital discharge, survival with a good neurologic outcome
Search Strategy:
A PubMed search was performed using the search terms "(out of hospital cardiac arrest) AND (epinephrine OR adrenaline)." The search was limited to humans, resulting in 207 citations (http://tinyurl.com/lsgy63r). These were screened, and two randomized controlled trials, one observational trial, and a systematic review were chosen for inclusion.
Bottom Line:
Epinephrine is currently recommended in the management of out-of-hospital cardiac arrest (OHCA) by both the American Heart Association and the European Resuscitation Council despite a paucity of clear evidence that it improves patient-centered outcomes. This lack of evidence has led some clinicians to question the use of epinephrine in cardiac arrest. The primary proposed benefit of epinephrine has been an increase in coronary perfusion pressure, which has been demonstrated in animal studies. While no placebo-controlled human studies have confirmed these findings, high-dose epinephrine has been shown to increase coronary perfusion to an even greater extent than low-dose epinephrine. However, this dose-response relationship does not necessarily confirm the benefits of epinephrine. While high-dose epinephrine has been shown to improve rates of return of spontaneous circulation (ROSC), it does not improve the more clinically relevant outcome of survival to hospital discharge (Gueugniaud 1998, Vandycke 2000). This may be in part due to reduced microcirculatory cerebral blood flow caused by epinephrine, resulting in worse neurologic outcomes among survivors.
There have been several observational studies evaluating the use of epinephrine in OHCA with varying results (Herlitz 1995, Holmberg 2002, Ohshige 2005, Wang 2005, Ong 2007, Yanagawa 2010). The largest of these studies (Hagihara 2012) prospectively evaluated outcomes in over 400,000 patients in Japan. The authors found that while epinephrine use was associated with a significant increase in ROSC (adjusted odds ratio [AOR] 2.01, 95% CI 1.83-2.21), it was also associated with significant decreases in survival at one month (AOR 0.71, 95% CI 0.62-0.81) and survival with good neurologic function, as defined by a cerebral performance category (CPC) score of 1 or 2 (AOR 0.41, 95% CI 0.33-0.52).
There has been, to date, one randomized controlled trial comparing the use of epinephrine with placebo in the management of OHCA (Jacobs 2011). While this study demonstrated improvements in survival to hospital discharge with the use of epinephrine, this result did not achieve statistical significance (OR 2.2, 95% CI 0.7-6.3). The study was afflicted, unfortunately, by a small sample size and was underpowered to detect a potentially clinically significant improvement in outcomes. While the investigators initially planned to perform a large study involving five ambulance services throughout Australia and New Zealand, all but one service withdrew from the study due to ethical concerns.
There has been an additional randomized controlled study evaluating the effectiveness of intravenous drug administration during cardiac arrest (Olasveengen 2009), of which epinephrine is arguably the most important component. This study also demonstrated higher rates of ROSC among patients with IV access initiated by EMS (OR 1.99, 95% CI 1.48-2.67). However, there was no statistically significant improvement in survival to discharge (OR 1.16, 95% CI 0.74-1.82) or survival with a CPC score of 1 or 2 (OR 1.24, 95% CI 0.77-1.98). There was a large degree of crossover in this study, and the authors chose to perform an "as treated" analysis of the data based on epinephrine administration (Olasveengen 2012). This analysis demonstrated a significant decrease in both survival to discharge (OR 0.5, 95% CI 0.3-0.8) and survival with a CPC score of 1 or 2 (OR 0.4, 95% CI 0.2-0.7) when epinephrine was administered. These results must be viewed cautiously, as the reasons for crossover between the groups likely disrupted the prognostic balance afforded by randomization, leading to a poorer baseline prognosis among patients receiving epinephrine. Note that among 418 patients randomized to receive an IV, 42 did not have IV access initiated because they had ROSC; only 12 patients in this group did not have IV access initiated due to futility. Among 433 patients randomized to have no IV access initiated, 27 received IV access only after having ROSC and then rearresting.
The existing data are clearly limited, and the authors of a systematic review on the subject understandably conclude that "although the results…exhibit the paucity of high quality published research supporting the use of epinephrine in OHCA, there is insufficient evidence to support changing current guidelines" (p. 90). Fortunately, a trial has recently begun in the United Kingdom (PARAMEDIC 2: the Adrenaline Trial), which plans to enroll 8000 patients randomized to either epinephrine or placebo. This trial will hopefully further elucidate the efficacy or harm associated with epinephrine and provide robust outcomes data to solidify or change our current practice.
Geriatric Fall Assessment in the Emergency Department
Jan 12, 2015
Journal Club Podcast #18: November 2014
Chris Carpenter, Mike Galante, and I get together to talk about two things we all do eventually: growing old and falling down...
Mrs. C., an 86-year old female, presents to your academic emergency department (ED) via ambulance after an accidental fall at home. She is recently widowed and lives alone, but she reports that she has two adult "children" who live nearby and check on her every day, either in person or by telephone. About 12 hours prior to ED presentation she was walking to the bathroom from her bedroom when she tripped over something and fell onto her left side. She notes left wrist and left hip pain, but denies loss of consciousness, headache, chest pain, dyspnea, abdominal pain, or focal weakness. She was unable to lift herself off the floor last night and was found by one of her grandchildren in the morning. EMS reports a well-kept home without any obvious cause for her fall.
Her vitals are BP 160/85, P 60, RR 16, T 37.4°C, and her room air pulse ox is 100%. She is somewhat overweight and in no apparent distress, although she notes that her left hip and wrist hurt to touch or move. You note no contusions or swelling over the hip or wrist, nor does your physical exam demonstrate any other abnormal findings. Her x-rays of the hand/wrist/forearm/pelvis and hip are unremarkable, as is an x-ray of her C-spine. You are not concerned with an occult scaphoid fracture of her wrist. However, due to her persistent left hip pain you order an MRI of her hip/pelvis and no fracture is demonstrated. With adequate opioid analgesia, Mrs. C's pain is well-controlled and she is able to ambulate with minimal discomfort, so you plan discharge home. You notify Mrs. C and her son of these findings and plans, and call the referring physician to notify her. However, her son asks you how likely it is that Mrs. C will fall again and how to prevent future falls. You've just heard Dr. Sam Smith's Washington University lecture about FOAMed and decide to listen to the Skeptics Guide to Emergency Medicine podcast "Falling to Pieces" about geriatric adult fall-risk stratification in the post-ED period. The podcast discusses a prognostic systematic review on this topic, which you decide to read to learn more. The systematic review provides you with a search strategy to find more published research on this topic.
PICO Question:
PICO Question #1
Population: Geriatric patients in the ED
Intervention: Risk-stratification for falls and injurious falls in the months following an episode of ED care
Comparison: None
Outcome: Sensitivity, specificity, likelihood ratios for fall-risk
PICO Question #2
Population: Geriatric patients in the ED
Intervention: Fall-prevention program + post-ED discharge standard care
Comparison: Post-ED discharge standard care
Outcome: Falls or injurious falls in the post-ED discharge period
Search Strategy:
You note that the systematic review provides exhaustive (3-pages!) PUBMED and EMBASE search strategies in Data Supplement 1. You decide to go another direction, so you conduct a "broad" search using the prognostic study filter in PUBMED Clinical Query and the search term "geriatric fall*", yielding 5 citations, including the systematic review (see http://tinyurl.com/orjnsdk). You obtain the rest of the PICO Question #1 research manuscripts by reviewing the results and the bibliography of the systematic review. For PICO Question #2, you use the therapy study filter on PUBMED Clinical Query and the search term "fall prevention elderly", yielding 71 citations (see http://tinyurl.com/nkb4x9n).
Bottom Line:
Standing-level falls are the number one cause of geriatric trauma-related mortality, with 33% of those over age 65 falling each year (increasing to 50% of those over age 80)! One in five falls results in an injury, and 44% of individuals with fall-related hospital admissions are readmitted to the hospital within 1 year, with 33% 1-year mortality. Fallers represent a population with substantial recurrent healthcare use. A history of falls also predicts post-operative complications in older adults undergoing major elective surgery. Elderly falls frequently precipitate a vicious circle of fear of falling, social isolation, diminished quality of life, and increased short-term mortality. Even those with minor fall-related injuries discharged home from the ED experience recurrent falls, functional decline, and ED returns within 3 months.
Current fall guidelines recommend the following for the ED evaluation of older adults who have fallen:
· In patients who have fallen, evaluate for precipitating causes of falls such as medications, alcohol use/abuse, gait or balance instability, medical illness, and/or deterioration of medical condition.
· Assess for gait instability in all ambulatory fallers; if present, ensure appropriate disposition and follow-up, including attempts to reach primary care provider.
· Assess and document the presence of comorbid conditions (e.g. pressure ulcers, cognitive status, falls in the past year, ability to walk and transfer, renal function, and social support) and include them in your medical decision-making and plan of care.
In addition, ACEP, AGS, ENA, and SAEM jointly wrote and released the “Geriatric ED Guidelines” in 2014 that includes protocol and quality improvement recommendations for fall management in the ED setting. Unfortunately, emergency physicians rarely evaluate fall risk and older adults who present to the ED for evaluation after a fall rarely receive guideline-directed management (Donaldson 2005, Naughton 2012).
Where is the disconnect between awareness that geriatric falls are a prevalent and clinically important problem and the delivery of guideline-directed care? The Knowledge Translation Pipeline (Figure) provides a framework for understanding. Awareness by ED providers that geriatric falls matter is probably not a large "leak" – but awareness that fall-risk screening tools exist, where to find them, and how to interpret them likely is. The next "leaks" of acceptance and applicability are probably large ones in the pipeline, since healthy skeptics legitimately challenge the quality of ED-based fall research and how valid these findings are for real-world settings. This Journal Club addressed many of those issues.
The PGY-I paper assessed fall-risk factors for community-dwelling geriatric patients presenting to (and discharged home from) one U.S. ED for any reason except a fall. They identified four risk factors (non-healing foot sores, past falls, inability to cut one's own toenails, and self-reported depression – the "Carpenter instrument" – see box) as independently associated with 6-month fall risk, but this risk prediction instrument requires validation, followed by feasibility and effectiveness testing, before widespread use. They also noted that objective tests of gait and balance do not predict 6-month falls in ED populations, though these performance tests are reliable (Intraclass Correlation Coefficient = 0.95 for distinguishing "normal", "borderline", or "abnormal").
The PGY-II manuscript described the "Tiedemann instrument", a 2-question fall-risk stratification tool. Unfortunately, the Tiedemann instrument does not significantly increase or decrease the estimated fall risk for individual patients who have been to the ED with a fall-related complaint or a history of multiple recent falls. As in the PGY-I study, the authors also found that current objective performance tests (like the timed Get Up and Go) performed in the ED do not accurately predict future falls.
The PGY-III manuscript was a prognostic systematic review from the Academic Emergency Medicine Evidence Based Diagnostics series. This study found that no single risk factor (including past falls and objective tests of gait/balance) accurately increases or decreases the risk of falls in the 6 months following ED discharge. In addition, the only ED fall-risk screening instruments that existed in 2014 were the Carpenter and Tiedemann instruments described above. Of these two instruments, a Carpenter score >1 has a negative likelihood ratio of 0.11 (95% CI 0.06-0.20) and most accurately identifies older adults at lower risk of falls. Neither instrument accurately identifies the subset at greater risk of falls. Based upon the sensitivity/specificity (93% and 61%, respectively) of the Carpenter instrument at a threshold of >1, the benefit of fall-risk intervention derived from the PROFET study (20% absolute risk reduction), a hypothesized risk of fall-screening of 0.5%, and a risk of intervention in patients without fall risk of 2%, the test threshold was estimated at 7% and the treatment threshold at 27%. In other words, continuing to assess fall-risk in patients at less than 7% risk may harm more patients than are helped. Similarly, continuing to assess fall-risk in those with >27% risk, rather than simply intervening, may also harm more patients than are helped.
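Those threshold estimates can be checked against the standard Pauker-Kassirer formulas. A quick arithmetic check, assuming the common formulation (B = intervention benefit, H = harm of intervening in patients not at fall risk, R = risk of screening itself), reproduces the published numbers:

```python
sens, spec = 0.93, 0.61   # Carpenter instrument at threshold > 1
B = 0.20                  # absolute risk reduction from the PROFET intervention
H = 0.02                  # risk of intervening in a patient not at fall risk
R = 0.005                 # risk of fall-risk screening itself

test_threshold = ((1 - spec) * H + R) / ((1 - spec) * H + sens * B)
treat_threshold = (spec * H - R) / (spec * H + (1 - sens) * B)
print(round(test_threshold, 2), round(treat_threshold, 2))  # 0.07 0.27
```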
The PGY-IV manuscript evaluated a fall-prevention intervention in England, but the approach was not really ED-based, and the resources available to these investigators via the British National Health Service are generally unavailable in the U.S. Nonetheless, they found that community-dwelling adults over age 65 who visit the ED following a fall demonstrate reduced fall rates (Number Needed to Treat to prevent one fall = 5) with an intensive medical evaluation and occupational therapy home safety assessment in the weeks following the index fall. Replication of these results necessitates a universal healthcare system where every patient has insurance and a primary care physician, as well as access to a one-stop-shopping Day Hospital for multi-disciplinary assessment when indicated based on the medical evaluation. In addition to these system-level requirements, future studies should evaluate real-time ED-based interventions using validated fall-risk screening instruments, and should assess frailty, dementia, and health literacy as confounding variables.
Although this research fails to provide the definitive fall screening strategy recommended by the AGS/BGS Fall Guidelines, the ACEP/SAEM Geriatric ED Guidelines, and EM resident core competencies, the status quo is unacceptable, and the quantitative summary estimates of fall incidence and of risk factor accuracy and reliability provide an evidence basis on which clinicians, nursing leaders, administrators, educators, policy-makers, and researchers can build. Fall prevention (both risk stratification and intervention) in ED settings has been disappointing and largely unsuccessful. No single risk factor significantly increases or decreases the risk of 6-month falls for geriatric ED patients. In one single-center study, the "Carpenter instrument" identified low-risk patients (LR- 0.11), but additional research is needed to reproduce these results, and no instrument accurately identifies high-risk patients. The ideal fall-risk screening instrument would be accurate and reliable, sufficiently brief for routine ED use by clinicians, nurses, or ancillary staff, and would not require equipment that is unavailable in the average ED.
Accurate assessment of post-ED fall risk also has implications for fall prevention intervention studies. Since fall risk is unlikely to remain static throughout an episode of ED care (or for the 6-months following an episode of ED care), the fluid nature of fall vulnerability suggests that within-ED and post-ED repeat fall-risk assessment is logical (but never studied). In addition, since a one-size-fits-all approach to fall prevention has been largely unsuccessful, more accurate fall-risk stratification could provide feasible, targeted strategies upon which to focus interventions towards the unique fall profile of the individual patient using adaptive clinical trial design.
Several attendees at Journal Club suggested that health-literacy-appropriate ED discharge instructions educating patients about fall-risk factors in the home would be worthwhile. Figure 2 (from Carpenter CR, "Falls & Fall Prevention in the Elderly," in Geriatric Emergency Medicine: Principles and Practice, Kahn JH, Magauran BH, Olshaker JS (eds), Cambridge Medicine, 2014, pages 345-346) provides one such example. Incorporating a version of this figure into our fall discharge instructions is a future resident Quality Improvement project. Unanswered questions for future research include:
1) Can high-risk geriatric fallers who require admission or expedited outpatient evaluation be identified in the ED?
2) Can simple and feasible interventions reduce fall or injurious fall rates after the ED visit?
3) Could rapid response teams or special ED-associated units evaluating geriatric adults at increased risk for recurrent falls reduce fall-related injuries and improve the efficiency of inpatient resource utilization?
4) Can hospital-at-home models for management of high-risk fallers be developed, and what are the characteristics of models that successfully lower falls rates?
5) What are the key elements of electronic information systems that facilitate point-of-care risk stratification and communication of high-risk findings to emergency providers and primary care physicians?
Treat and Release vs. Observation After Naloxone for Opioid Overdose
Nov 24, 2014
Journal Club Podcast #17: October 2014
Evan Schwarz and I sit down to talk about heroin, naloxone, and personal responsibility...
It's a typical TCC shift when you hear yelling coming from the ambulance bay. You run over to find an apneic patient has just been pushed out of a car that then sped off. As you place him on a gurney and move him into the emergency department (ED), you notice fresh track marks, a recently used syringe on the ground, and drug paraphernalia sticking out of his pockets. You begin to bag him and notice that he has pinpoint pupils and no other signs of trauma. You administer an atomized dose of intranasal naloxone and continue bagging him until he begins to breathe adequately. A few minutes later, the nurse comes to get you to let you know that the patient is now awake and wants to go. You go back to the room and tell him that he needs to stay since you are worried that he will stop breathing once the naloxone wears off. You offer him a turkey sandwich as a compromise to get him to stay and go to see the new patient with respiratory distress.
You have just finished intubating the patient with respiratory distress and are getting ready to place a central line in a patient with severe sepsis when the nurse comes to let you know that your overdose patient is getting belligerent and that you need to do something. Before you even have a chance to calm him down, he tells you that he is leaving now, angrily asks who gave you permission to bring him here, wants to know who is going to give him money to get home, and wants to know why someone filled his pants with ice. You tell him he cannot go yet, and that if he doesn't calm down, you will have to call security. After going through this 2 more times, he has now been in the ED for approximately an hour. You offer him chemical dependency information and discharge him. After he leaves, you realize that neither you, the patient, the nursing staff, nor the patient sharing a trauma room with him enjoyed this experience, and you wonder whether it was really necessary to go through all that or whether you could have just let him go much earlier.
PICO Question:
Population: Patients with acute opioid overdose requiring naloxone administration with return to baseline.
Intervention: Transport to the hospital by EMS or ED observation.
Comparison: Immediate release by paramedics or from the ED
Outcome: Rebound opioid toxicity resulting in death or anoxic brain injury, or requiring additional administration of naloxone.
Search Strategy:
PubMed was searched using the following strategy: “naloxone AND overdose AND (release OR transport OR recurrence)” (http://tinyurl.com/n937tns). The 57 resulting citations were searched for observational or randomized controlled trials evaluating the safety and efficacy of immediate release for patients responding to naloxone after opiate overdose. The following 4 articles were selected for review.
Bottom Line:
Opioid overdose remains an increasing problem in the United States, resulting from the high prevalence of heroin abuse and the increasing abuse of prescription opioids. Death due to opioid overdose has been increasing in recent years, prompting some communities (such as Boston, MA) to institute programs for bystander administration of naloxone to reverse toxicity. Whether administered by a bystander or EMS, naloxone typically results in adequate reversal of toxicity. In most EMS systems, such patients are then transported to an ED for observation, given naloxone's short duration of action relative to many opioids and the resulting concern for recurrent toxicity. Some have proposed that such observation is unnecessary, and that these patients can be released at the scene as long as they return to their baseline. We sought to evaluate the evidence for such a proposed "treat and release" protocol, and to assess the incidence of recurrent toxicity following naloxone administration.
In those studies involving the treatment and release of patients by EMS, without transport to the hospital, the risk of death from recurrent opioid toxicity was low, ranging from 0% (Vilke 2003, Wampler 2011) to 0.13% (Rudolph 2011). The third of these studies was conducted in Denmark, where a physician is present in the field to assess the patient and make transport decisions, making it difficult to extrapolate the study's results to our EMS system. A strength of this study was Denmark's centralized database of all citizens, which allowed the authors to readily identify anyone who died after being released. In addition, the authors were able to obtain all forensic data surrounding the deaths to determine which were due to rebound opioid intoxication and which were from other causes. The former two studies, on the other hand, were conducted in large US cities (San Diego, CA and San Antonio, TX, respectively) with more familiar EMS systems, and demonstrated similarly low rates of death due to recurrent toxicity. While no deaths were found in the US studies, deaths occurring in other counties could have been missed, as the US lacks a comparable centralized database. A fourth study, based in Helsinki, showed similar results: the authors found no cases of patients dying from recurrent opioid toxicity (Boyd 2006).
A single ED-based study evaluated the risk of toxicity recurrence based on a retrospective chart review (Watson 1998). A Delphi panel was employed to determine whether rebound toxicity occurred following a response to naloxone administration in the ED. In this small study (42 total cases), recurrence of toxicity was identified as either definite or probable in 13 cases (31%, 95% CI 17-45%). Interestingly, of the 13 cases, only 2 were noted to have recurrent respiratory depression; the other 11 patients had decreased mental status without any respiratory compromise. The authors later state that neither of the patients with respiratory depression received any specific treatment, so it is difficult to determine the significance of their respiratory depression. These results are confounded by differences in baseline patient characteristics from those in our practice, as well as by the use of a highly subjective outcome. Specifically, nearly 50% of patients in this study presented following a suicide attempt, and 81% of cases involved an oral ingestion of opiates. Anecdotally, the large majority of opiate overdoses seen in our setting involve the recreational use of IV heroin, and recurrence rates may therefore be different. Additionally, of those patients receiving naloxone initially, only 27% were noted to have respiratory depression, making the indication for naloxone itself suspect. Recurrence of toxicity, the primary outcome, was highly subjective and was therefore determined by a Delphi panel; only a subset of the cases judged to have recurrence were deemed "definite," with the remainder only "probable." Even in those cases with definite recurrence, the exact indication for repeated doses of naloxone remains unclear, and there is no evidence that more serious patient-centered outcomes were prevented in any of these cases.
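As an aside, the 31% (95% CI 17-45%) recurrence estimate is easy to reproduce; here is a quick sketch using a simple Wald (normal-approximation) interval, which may or may not be the method Watson et al actually used:

```python
# Reproducing the Watson 1998 recurrence estimate (13 of 42 cases) with a
# Wald binomial confidence interval. Illustrative only; the published CI
# may have been computed with a different method.
from math import sqrt

events, n = 13, 42
p = events / n
se = sqrt(p * (1 - p) / n)             # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"Recurrence: {p:.0%} (95% CI {lo:.0%} to {hi:.0%})")
# Recurrence: 31% (95% CI 17% to 45%)
```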
The bulk of this data supports the "treat and release" strategy adopted by many EMS systems, with the caveat that such a strategy be employed in select patients who have returned to baseline with stable vital signs and are capable of understanding the risks associated with discharge in the field. If patients want to go to the ED, this should still be encouraged, as they can be evaluated for drug-related infectious diseases and receive information about addiction treatment and other social services. Transporting the patient against their will, and holding them in the ED, is probably unnecessary and does not appear to be supported by the available evidence. However, if the patient took a longer-acting opioid such as methadone, it may be prudent to specifically warn them of the associated risks, as these studies did not specifically assess the safety of a "treat and release" strategy in patients exposed to long-acting opioids.
Thrombolytics in Submassive Pulmonary Embolism
Oct 30, 2014
Journal Club Podcast #16: September 2014
Maia Dorsett and I chat about thrombolytics, pulmonary emboli, and Muppets. That's right, Muppets...
You are working a day shift in your ED when you meet a generally healthy 55-year-old male who acutely developed chest pain and shortness of breath at home. He tells you that he underwent an orthopedic procedure 10 days prior. He is tachycardic and has an oxygen saturation of 94% on room air. You order an EKG, which demonstrates an S1Q3T3 pattern, and a troponin, which is mildly elevated at 0.12. You think to yourself, "I've got this diagnosis," and order a PE protocol CT, which identifies bilateral acute pulmonary emboli with a significant clot burden as well as dilation of the right ventricle. A bedside cardiac ultrasound is suggestive of right ventricular dilation, so you send the patient to the Cardiac Diagnostic Lab for a formal ECHO, which reveals a markedly dilated right ventricle with flattened septal motion, McConnell's sign (right ventricular free-wall hypokinesis with sparing of the apex), and tricuspid regurgitation, consistent with right ventricular dysfunction in the context of an acute pulmonary embolism.
Given the patient's relatively young age and previously normal cardiac function, you consider whether to offer thrombolytic therapy given the degree of right ventricular dysfunction seen on ECHO. However, the patient has an SBP > 110 mmHg and is on room air, and hence does not meet criteria for a "massive" PE. You wonder whether there is any recent data that can help you decide whether to offer thrombolysis in such "submassive" PE. You ask yourself, "What can I tell my patient about the effect on his overall risk of mortality and long-term functional outcome?"
You begin by reading a previous Washington University Journal Club on this topic from July 2010, at which time the evidence was sparse, and a firm conclusion could not be drawn. You decide to see if any new literature exists, formulate your PICO question, and start your search. Luckily for you, there have been a lot of recently published articles attempting to answer your very question.
PICO Question:
Population: Adult patients with pulmonary embolism and evidence of right heart strain, with stable hemodynamics.
Intervention: Thrombolytic therapy.
Comparison: Anticoagulation with heparin, low molecular weight heparin (LMWH), or novel anticoagulants.
Outcome: Death, hemodynamic collapse, need for intubation, long-term functional outcomes, and quality of life.
Search Strategy:
In addition to the recently published PEITHO study, a PubMed search was performed using the strategy: (("Thrombolytic Therapy"[Mesh] OR "Fibrinolytic Agents"[Mesh]) AND "Pulmonary Embolism"[Mesh]) with filters for meta-analysis and randomized controlled trial, limited to the last 5 years (http://tinyurl.com/omwebmt). This resulted in 22 articles, from which 2 RCTs and a meta-analysis were chosen.
Bottom Line:
Venous thromboembolic disease, including both deep venous thrombosis (DVT) and pulmonary embolism (PE), is a prevalent condition, affecting an estimated 300,000 to 600,000 individuals in the US each year (Beckman 2010). Pulmonary embolism alone affects approximately 23 individuals per 100,000 (Anderson 1991). Given the high mortality of nearly 50% (Kucher 2006) in patients with PE and hemodynamic instability - i.e. “massive” PE - guidelines universally recommend considering the use of systemic thrombolytics in such cases (AHA, ESC, ACEP, NICE).
The use of thrombolytic therapy in hemodynamically stable patients with signs of right ventricular (RV) dysfunction - i.e. "submassive" PE - is highly debated. The guidelines vary widely in their recommendations, from firmly stating "Do not offer pharmacological systemic thrombolytic therapy to patients with PE and haemodynamic stability" (NICE), to noting that there is insufficient evidence to make recommendations (ACEP), to clearly recommending thrombolysis when there is evidence of RV dysfunction or elevated cardiac biomarkers (AHA). In the last year, at least 2 randomized controlled trials evaluating thrombolysis in submassive PE have been published, along with a recent meta-analysis, which together dwarf much of the previously published data. As a result, it will be important to review this new evidence and update existing guidelines accordingly.
While mortality rates in hemodynamically stable patients with PE are much lower than in those with massive PE, ranging from 3% to 15%, the presence of RV dysfunction on ECHO confers nearly double the risk of death (unadjusted risk ratio 2.4), while elevations in cardiac biomarkers confer an even greater increased risk (Sanchez 2008). In addition to potential mortality benefit from thrombolysis, some have theorized a potential to decrease the long-term morbidity associated with large clot burden. The incidence of symptomatic chronic thromboembolic pulmonary hypertension (CTEPH) following venous thromboembolic disease is low, at around 4% at 2 years (Pengo 2004). There is no conclusive evidence that large clot burden, RV dysfunction, or elevated cardiac biomarkers increase this incidence. Prior observational studies have suggested that this incidence is reduced by the use of thrombolytics (Kline 2009), but no randomized trials have supported this finding conclusively.
A prior journal club on this topic, conducted in 2010, was limited by the lack of randomized trials assessing patient-important outcomes. We therefore sought to update our findings in light of the recent influx of higher-quality evidence in this area. The MOPETT trial evaluated the use of low-dose tPA in those with "moderate" PE, defined here as PE involving 2 or more lobar or main pulmonary arteries. A significant decrease in the incidence of pulmonary hypertension was observed (number needed to treat [NNT] 2.4), with no difference in mortality or recurrent PE. Oddly, there were no bleeding events reported in either group. The primary outcome was unfortunately based on ECHO findings, and does not necessarily translate into patient-important outcomes. In contrast to MOPETT, the TOPCOAT study sought to evaluate functional capacity and quality of life following the use of tenecteplase in patients with PE and either RV dysfunction or elevated cardiac biomarkers. The authors found a reduction in a composite outcome of "adverse events," defined as recurrent PE or DVT, death, poor functional capacity, or poor quality of life (NNT 4.5). However, the groups had similar mortality rates, and it is unclear from the study whether these results were driven by a decrease in patient-important outcomes (quality of life) or surrogate outcomes (ECHO findings). This, along with the fact that the study was stopped early, makes interpretation of the results difficult.
The largest study on this topic to date (PEITHO) was a double-blind study in which patients with both RV dysfunction and an elevated troponin were randomized to either anticoagulation alone, or anticoagulation plus a bolus of tenecteplase. They reported a significant decrease in the combined risk of death or hemodynamic decompensation at 7 days (NNT = 33), but a significant increase in the risk of major bleeding (NNH = 14). Mortality itself was similar in the two groups, and the benefit seen in the primary outcome was derived largely from a decreased risk of hemodynamic decompensation. Given the high risk of bleeding found, particularly intracranial bleeding, it would be important to know the functional outcomes of survivors. It is difficult to fully assess the trade-off in this study between benefit (decreased risk of hemodynamic decompensation) and harm (increased risk of major bleeding).
Finally, a recent meta-analysis (Chatterjee 2014) demonstrated an improvement in mortality with thrombolysis in all patients with PE, regardless of severity, with an NNT of 59. This mortality benefit was balanced by an increased risk of major bleeding, with an NNH of 18. Specifically, for patients with "intermediate-risk" PE, the mortality benefit was slightly smaller (NNT 65) while the risk of major bleeding was the same (NNH 18). The degree of heterogeneity in the included studies - including poorly defined criteria for "intermediate-risk" PE, poorly defined criteria for "major bleeding," and differing routes of administration of thrombolytics - limits our ability to draw firm conclusions from the meta-analysis. For example, the included ULTIMA study involved ultrasound-assisted, catheter-directed administration of thrombolytics. One could argue that pooling results from such disparate studies makes little clinical sense, and these results should be interpreted cautiously (if at all).
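One way to weigh these competing point estimates is to restate NNT and NNH as events per 1000 patients treated, since both are simply reciprocals of absolute risk differences. A quick sketch using the meta-analysis numbers quoted above (point estimates only; their confidence intervals are wide):

```python
# Restating the Chatterjee 2014 point estimates as events per 1000 treated.
# NNT and NNH are reciprocals of absolute risk differences, so 1000 / NNT
# gives the expected number of events per 1000 patients.

nnt_mortality = 59   # deaths prevented, all-comers
nnh_bleeding = 18    # major bleeds caused

def per_1000(nn: float) -> float:
    return 1000 / nn

print(f"Per 1000 treated: ~{per_1000(nnt_mortality):.0f} deaths prevented, "
      f"~{per_1000(nnh_bleeding):.0f} major bleeds caused")
# Per 1000 treated: ~17 deaths prevented, ~56 major bleeds caused
```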
Despite an influx of new evidence over the last few years, the debate over thrombolysis in "submassive" or "intermediate-risk" PE rages on. Recent studies have failed to show a mortality benefit, whether measured 5 days (TOPCOAT), 7 days (PEITHO), or months (MOPETT) after administration, and a trade-off between hemodynamic decompensation and bleeding predominates. Some have proposed long-term functional benefits in those with significant clot burden or evidence of RV strain, but high-quality studies have yet to demonstrate an improvement in patient-important functional outcomes. While MOPETT demonstrated improved pulmonary artery systolic pressures on ECHO, the relationship between such a finding and patient-centric outcomes is unclear at best. The TOPCOAT study demonstrated an improvement in a composite outcome including death, recurrent DVT or PE, ECHO findings, functional capacity, and quality of life, but the contribution of each individual component is unclear, and it is difficult to translate these findings into clinical practice. For now, thrombolysis should be reserved for those with hemodynamic decompensation, or for those felt to be at high risk of decompensation whose bleeding risk does not outstrip the potential benefit.
Non-operative Management of Pediatric Appendicitis
Sep 12, 2014
Journal Club Podcast #16: August 2014
I sit down with my pediatric EM homies - Drs. Trehan and Horst - and shoot the breeze about pediatric appendicitis...
You are working in the ED in rural Missouri and receive sign-out on a ten-year-old male presenting with apparent uncomplicated appendicitis. The patient presented with increasing right lower quadrant pain for the past 36 hours, nausea, and anorexia. He did not have a fever and was able to walk into the emergency department. His vital signs are as follows: BP 110/60, HR 85, RR 12, Sat 98% on room air, T 36.5. Your colleague believes that the patient has appendicitis without peritonitis on exam (RLQ pain on palpation, but no guarding or rebound tenderness). Your colleague had already consulted the general surgeon on call, who agreed with the physical exam findings but insisted on a CT of the abdomen/pelvis (with IV and oral contrast).
The CT revealed acute appendicitis with an appendiceal diameter of 1 cm without phlegmon, abscess, or fecalith. After you receive sign out on this patient, you prepare to transfer him to a children’s hospital as there are no pediatric surgeons at your hospital. However, the family would prefer to be treated at the local hospital, as the nearest children’s hospital is quite far away and it would be a hardship on the family to receive treatment there. Furthermore, the family asks if surgery is truly necessary as a family member recently died due to complications of a surgery. You consider the question, and begin to search the literature to see if there is any evidence that non-operative management of appendicitis in children (with antibiotics) is safe and effective.
PICO Question:
Population: Children with uncomplicated appendicitis
Intervention: Non-operative management with antibiotics
Comparison: Appendectomy
Outcome: Complications (wound infection, perforated appendix); recurrent appendicitis; resolution of symptoms; need for appendectomy despite initial non-operative management.
Search Strategy:
PubMed was searched using the strategy: (nonoperative OR non-operative) AND (appendicitis) AND (children), resulting in 66 citations. The “10 years” publication date filter was applied, reducing the total to 46 results (http://tinyurl.com/mlrljws). The titles and abstracts of these 46 articles were reviewed to assess for relevant studies involving children with uncomplicated appendicitis, and 4 relevant articles were selected.
Bottom Line:
Appendicitis remains a common diagnosis, with approximately 250,000 cases in the US each year; the highest incidence occurs in pediatric patients between the ages of 10 and 19 years (Addis 1990). Over 216,000 children required urgent appendectomy in the US between 2006 and 2008 (Masoomi 2012). While the overall complication rate from surgical appendectomy is low (~2.5%), surgery can be associated with significant time out of school for children and time off work for parents. Non-operative management of pediatric appendicitis therefore offers the theoretical advantage of reducing these often-ignored societal costs. While studies in adults have demonstrated better pain control, fewer complications, and shorter duration of sick leave with non-operative management (Mason 2012), the evidence is less well-defined in the pediatric population.
We identified a handful of small studies of varying methodological quality for our review. A single randomized controlled trial from Sweden, which enrolled 50 children with non-perforated appendicitis, found that 2 of the 24 children randomized to non-operative treatment required appendectomy within 3 months; an additional 7 underwent appendectomy within one year, for an overall appendectomy-avoidance rate of 62% (Svensson 2014). There were no major complications in either group, and the length of stay for the initial hospital admission was shorter in the operative group (median 34.5 hours vs. 51.5 hours, p = 0.0004). The authors reasonably conclude that "nonoperative treatment is feasible and safe."
Unfortunately, this randomized controlled trial did not evaluate the effect of non-operative management on other outcomes important to parents and children, such as time off work for the parent or time to return to school and normal activities for the child. These outcomes were addressed in a prospective nonrandomized trial from Columbus, Ohio (Minneci 2014). Children with uncomplicated appendicitis and symptoms lasting less than 48 hours were eligible for enrollment. The decision to proceed with surgery or non-operative management was made by the families of eligible patients after counseling using a standardized script delivered by 1 of 3 trained physicians, in order to minimize selection bias. A total of 77 children were enrolled, 30 of whom underwent non-operative management, with the remainder undergoing laparoscopic appendectomy. Three children initially managed non-operatively required appendectomy within 30 days of enrollment, for a 30-day success rate of 90% (95% CI 79-100%). Patients in this group had longer initial hospital stays (median 38 vs. 20 hours), but fewer days to return to normal activity (median 3 vs. 16.5) and fewer days of missed school (median 3 vs. 5). While quality-of-life questionnaire scores were slightly higher for the non-operative group, overall parental satisfaction scores were the same for both groups, and were quite high.
Two additional retrospective studies of lesser methodological quality were identified. In a case series of 24 patients treated by a single pediatric surgeon in Ontario (Armstrong 2014), 3 of the 12 patients initially managed non-operatively required appendectomy during the follow-up period (median 6.5 months). One patient in the non-operative group developed a post-operative infection, compared to two in the operative group; none of these required further surgery or led to long-term complications. A final case series from Turkey reviewed 16 patients with appendicitis treated non-operatively (Abes 2007). One patient failed initial non-operative management and required appendectomy after one day of antibiotics, while 2 additional patients developed recurrent appendicitis within one year and required appendectomy. The overall 1-year success rate was therefore 81% (95% CI 57-93.4%). No major complications were reported.
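For small case series like these, the quoted confidence intervals are worth checking; the 81% (95% CI 57-93.4%) figure is reproduced by a Wilson score interval, which behaves better than the simple normal approximation at small sample sizes. A minimal sketch:

```python
# Wilson score interval for 13 of 16 patients avoiding appendectomy at one
# year (Abes 2007). Reproduces the quoted 95% CI of 57-93.4%.
from math import sqrt

def wilson_ci(events: int, n: int, z: float = 1.96):
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

lo, hi = wilson_ci(13, 16)
print(f"1-year success: {13/16:.0%} (95% CI {lo:.0%} to {hi:.1%})")
# 1-year success: 81% (95% CI 57% to 93.4%)
```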
The data, while limited, suggest that non-operative management of pediatric appendicitis with antibiotics is safe and does not result in significant morbidity. It also appears that non-operative management results in less time out of school and a shorter time to return to normal activities, and most likely leads to less time off work for the parents. It therefore seems reasonable to offer an initial attempt at non-operative management to the parents of select children with uncomplicated acute appendicitis. The final decision will depend on many factors, including the comfort level of the parents, child, and surgeon, as well as the potential effect of the decision on child and parental activities.
Tranexamic Acid (TXA) for Hemorrhage in Trauma
Jul 27, 2014
Journal Club Podcast #15: July 2014
I'm joined this month by Drs. Wes Watkins and Bill Dribben to talk about the use of tranexamic acid to reduce hemorrhage in trauma patients...
As you sit in a break room at the NATO Role 3 Hospital at Kandahar Air Field (KAF), eating your seventh bag of beef jerky and waiting 45 minutes for a 2-minute YouTube video to download, you hear on the radio that a 9-line MEDEVAC flight is inbound with a severely wounded soldier. The patient arrives, and the flight medic relays the history: a 24-year-old active duty Army soldier was on patrol in Southern Afghanistan when a dismounted improvised explosive device (IED) detonated. He had no obvious head injury and a GCS of 15, but suffered significant extremity injuries, blood loss, and multiple traumatic amputations; tourniquets were applied to all 4 extremities. Primary and secondary survey reveals the following injuries: traumatic BKAs of the bilateral lower extremities; fragment wounds to the right buttock; traumatic amputation of fingers 3-5 with open metacarpal fracture of the left hand; fragment wounds to the left forearm with transection of the ulnar artery and nerve; traumatic AEA of the right upper extremity; fragment wounds to the right axillary artery; fragment wounds to the neck; and bilateral ruptured TMs. CT H/C/A/P was negative, and C/T/L spine were radiographically normal.
You initiate Damage Control Resuscitation (DCR) and Massive Transfusion (MT) guidelines, and the patient receives 38 units PRBCs, 33 units FFP, 4 units of platelets, 30 units of cryoprecipitate, and tranexamic acid (TXA) 1 g IV in 100 mL NS over 10 min, followed by 1 g IV in NS over 8 hr in the ED and OR. In the OR, the patient undergoes D&I of his wounds with conversion of his LUE injury to a mid-forearm amputation and his BLE injuries to AKAs. Post-op H/H is 9.9/26.5 and INR 1.2. The patient is airlifted by a Critical Care Air Transport Team (CCATT) to Bagram Air Field (BAF) for further stabilization. At BAF, the patient is taken to the OR for further wound management and receives 10 units PRBCs, 4 units FFP, 1 unit of platelets, and 10 units of cryoprecipitate. Post-op H/H is 11.2/31.8, platelet count 105, INR 1.2. A chest CT reveals a PE, and the patient is started on heparin. His BUN/Cr and K+ begin to rise, and he is started on continuous renal replacement therapy (CRRT). The patient is then validated for air movement to Landstuhl Regional Medical Center (LRMC), Germany, via CCATT. Prior to flight, CRRT is discontinued; the H/H is 7.2/21, so the patient is given 2 units PRBCs and loaded onto the aircraft, with a follow-up H/H of 8.5/25. During the flight, his H/H decreases to 6.8/20, and 1 unit PRBCs and 1 unit FFP are transfused, with an increase in H/H to 7.5/22. The patient is safely delivered to the ICU at LRMC.
Total time from point of injury (POI) to LRMC is approximately 48 hours, during which 51 units PRBCs, 38 units FFP, 5 units of platelets, and 40 units of cryoprecipitate are transfused. You begin to wonder what effect the TXA really had, and what evidence there is to support its use in military and civilian settings. You sit down to begin your literature search...
PICO Question:
Population: Civilian trauma patients presenting to the ED with hemorrhage requiring massive transfusions.
Intervention: Administration of TXA in the ED.
Comparison: Standard resuscitative measures.
Outcome: Overall mortality, death due to bleeding, thromboembolic events, coagulopathy, transfusion requirements.
Search Strategy:
MEDLINE was searched via PubMed using the strategy “tranexamic acid trauma” resulting in 299 articles (http://tinyurl.com/qxn4c45). The 4 most relevant articles were then chosen.
Bottom Line:
Traumatic injuries represent a significant source of morbidity and mortality worldwide, with hemorrhage responsible for 30% of in-hospital trauma deaths each year. Tranexamic acid (TXA) has been used for decades to reduce bleeding in a variety of situations, including cardiopulmonary bypass, menorrhagia, and upper GI bleeding. TXA is a lysine analog that occupies the lysine-binding sites on plasminogen, preventing its conversion to plasmin, the enzyme responsible for fibrin degradation (Okamoto 1997). This antifibrinolytic property theoretically decreases hemorrhage and blood loss, and would potentially benefit those at risk of bleeding from significant trauma.
The CRASH-2 (Clinical Randomization of an Antifibrinolytic in Significant Haemorrhage) trial, published in 2010, was an international, multicenter study that sought to evaluate the benefit of TXA in trauma from a global health perspective. Patients were enrolled primarily in low- and middle-income countries, where 90% of trauma deaths occur worldwide. Over 20,000 trauma patients from 274 hospitals in 40 countries were enrolled and randomized to receive either TXA or placebo. A small but significant reduction in mortality was observed in those patients receiving TXA (absolute risk reduction [ARR] 1.5%, 95% CI 0.49 to 2.47), with a number needed to treat (NNT) of 68. There was no observed increase in the number of vascular occlusive events (PE, DVT, MI, CVA), the most dreaded potential complication of TXA. Critics of the study point mainly to the subjective nature of the inclusion criteria (patients at risk of "significant hemorrhage") and exclusion criteria (those with clear indications or contraindications to TXA), the low risk of hemorrhage in the study population (only half of the patients required transfusion), and differences in transport times and trauma care between the included countries and the United States. Despite these critiques, this study led to the inclusion of TXA in the World Health Organization's list of essential medicines.
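Since the NNT is just the reciprocal of the absolute risk reduction, the CRASH-2 numbers are easy to sanity-check; here is a brief sketch using the rounded ARR and CI bounds quoted above (the published NNT of 68 presumably derives from the unrounded event rates):

```python
# NNT as the reciprocal of the ARR, using the CRASH-2 values quoted above.
# The CI bounds show how imprecise the mortality benefit estimate is.

arr = 0.015                        # absolute risk reduction (rounded)
arr_lo, arr_hi = 0.0049, 0.0247    # 95% CI bounds from the text

print(f"NNT (point estimate): {1 / arr:.0f}")              # ~67
print(f"NNT range: {1 / arr_hi:.0f} to {1 / arr_lo:.0f}")  # ~40 to ~204
```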
A subgroup analysis of data from the CRASH-2 trial attempted to further clarify the situations in which TXA is beneficial in trauma. In this analysis, a reduction in death due to bleeding was noted in those who received TXA less than 1 hour from the time of injury (RR 0.68, 95% CI 0.57-0.82) and those who received TXA between 1 and 3 hours from the time of injury (RR 0.79, 95% CI 0.64-0.97). Interestingly, a paradoxical increase in the risk of death due to bleeding was observed in those who received TXA more than 3 hours from the time of injury (RR 1.44, 95% CI 1.12-1.84). The authors also evaluated the effect of TXA on death due to bleeding based on initial SBP, GCS, and type of injury, and found no significant heterogeneity of effect across these subgroups.
A retrospective analysis of 896 patients treated in the military setting (the MATTERs study) was undertaken using a cohort of patients treated at Camp Bastion Hospital in Afghanistan, of whom 293 (32.7%) received TXA. Both NATO personnel (US and UK military) and Afghan national military personnel were included in the study. Given the observational nature of this study, it is not surprising that patients who received TXA were much sicker than those who did not, based on Injury Severity Score (ISS), Abbreviated Injury Scale (AIS), Revised Trauma Score (RTS), initial Glasgow Coma Score (GCS), and systolic blood pressure. Despite having a worse baseline prognosis, patients who received TXA had decreased mortality compared to those who did not, with an unadjusted RR for in-hospital mortality of 0.73 (95% CI 0.54 to 0.98). In the subset of patients who met criteria for massive transfusion (requiring 10 or more units of blood within 24 hours), a similar reduction in in-hospital mortality was observed with the use of TXA (RR 0.51, 95% CI 0.32 to 0.83). Multivariate logistic regression analysis revealed that the use of TXA was independently associated with reduced mortality in the massive transfusion subgroup, with an odds ratio for survival of 7.23 (95% CI 3.02 to 17.32).
As in the CRASH-2 trial, differences between patients enrolled in the MATTERs study and those cared for in a civilian US level 1 trauma center make it difficult to apply the results to our patient population. Patients in MATTERs were quite young, with a mean age of 23, and were almost entirely male. Additionally, the mechanism of injury was predominantly explosion, accounting for three-fourths of the patients receiving TXA and over 60% of those not receiving TXA. The remainder of the injuries were due to gunshot wounds, presumably mostly from large-caliber military weapons. These injury patterns likely resulted in significantly more hemorrhage than would typically be observed with motor vehicle crashes and smaller-caliber gunshot wounds, potentially magnifying the apparent effect of TXA.
To evaluate the effect of TXA in a civilian US population, a retrospective analysis of outcomes was conducted at Ryder Trauma Center in Miami, FL over a 3-year period from August 2009 to January 2013. A cohort of 150 patients who were given TXA during that period (at the discretion of the treating physician) was matched to a similar cohort using propensity matching. Patients were matched based on age, sex, presence of traumatic brain injury, mechanism of injury, systolic blood pressure, need for blood transfusion, and injury severity score (ISS). The authors found that patients who received TXA had increased mortality (31% vs. 23%), though this difference was not statistically significant (RR 1.31, 95% CI 0.90 to 1.92). By excluding certain groups of patients (those who died within 2 hours of arrival, those with TBI, those who received less than 2 liters of blood, and those with a systolic blood pressure over 120 mmHg), the authors were able to find statistically significant increases in mortality with the use of TXA.
While this study would seem to suggest significant harm from the use of TXA in a civilian US trauma center, this seems unlikely. Given that patients who received TXA in this study were taken to the OR more rapidly than their counterparts (median 24 minutes vs. 35 minutes) and required significantly more blood products over the first 24 hours (2,250 vs. 1,999 mL), it seems likely that this was a sicker cohort at baseline, with a more sinister prognosis from the outset. Additionally, the study was underpowered, and the observed mortality increase was not statistically significant; while statistical significance was achieved by excluding certain groups of patients, these were subgroup analyses and must be validated prospectively before changing practice. Unfortunately, the authors do not report the incidence of thrombotic events; aside from thrombosis, no reasonable mechanism for the observed increase in mortality can be attributed to TXA.
Conclusions
The bulk of the existing evidence suggests that TXA reduces mortality in trauma patients with hemorrhage. This includes data from a cohort of patients with less significant hemorrhage (CRASH-2) as well as a military cohort with significant hemorrhage, including those requiring massive transfusion (MATTERs). While the single study performed on US patients revealed a non-statistically-significant increase in mortality with the use of TXA, the methodological limitations of this retrospective matched-cohort study, and the large disparity between its results and those of the other existing studies, bring its findings under scrutiny. For now, the evidence suggests that TXA, when given within 3 hours of injury, likely reduces mortality in patients at risk of significant traumatic hemorrhage, and should at least be considered in those requiring massive transfusion protocol initiation.
Antibiotics for Anterior Nasal Packing in Epistaxis
Jun 13, 2014
Journal Club Podcast #14: May 2014
I join...myself, to talk about the routine administration of prophylactic antibiotics for anterior nasal packing in epistaxis...
You are working in your community emergency department (ED) one winter afternoon when you encounter Mr. D, a 64-year-old gentleman with a history of hypertension who takes amlodipine and a daily 81 mg aspirin. He presents with one hour of continuous right-sided epistaxis. You have him blow his nose to evacuate the clot and spray phenylephrine in both nares. You take two cotton balls soaked in a mixture of phenylephrine and viscous lidocaine and place one in each anterior nasal cavity. You then apply a nose clip and leave him alone for 20 minutes.
When you return and remove the cotton balls, intent on cauterizing the offending area with silver nitrate, you find that there is still a continuous ooze from the right anterior septal mucosa that is too brisk to allow for cautery. You bite the bullet, grab a 5.5 cm Rapid Rhino, and insert it in the right naris. After inflating the balloon, you find that you have achieved good hemostasis. You call your on-call ENT to arrange a follow-up appointment in the next 2-3 days for removal of the Rapid Rhino and further assessment of the epistaxis. The ENT agrees to see the patient, and at the end of the call reminds you to send the patient home on oral antibiotics until follow-up. You send the patient home with a prescription for amoxicillin-clavulanic acid.
You are back working again 4 days later, when Mr. D returns to the ED. His nasal packing was removed the day prior without incident and a small area in his nasal mucosa was cauterized successfully by the ENT physician. Mr. D is presenting now with horrible diarrhea and abdominal cramping. As you leave the room and get ready to order a C. diff toxin screen for his stool, you wonder if it’s really worth putting patients with anterior nasal packing on prophylactic antibiotics, given the adverse symptoms often encountered. You decide to check the literature when you get off work.
PICO Question:
Population: Adult patients with anterior nasal packing in place for epistaxis
Intervention: Prophylactic oral antibiotics
Comparison: No prophylactic antibiotics
Outcome: Toxic shock syndrome, sinusitis, otitis media
Search Strategy:
You search PubMed using the terms “epistaxis AND antibiotics,” and identify 200 articles, 3 of which are most relevant to the PICO question (http://tinyurl.com/kcptxrh). Unable to find a 4th article specifically addressing antibiotics in nasal packing for epistaxis, you instead choose an article from the bibliography of one of the selected articles that addresses antibiotic use following septoplasty.
Bottom Line:
Epistaxis is a common problem, with a lifetime incidence of approximately 60% (Gifford 2008). While the majority of cases do not require medical attention, epistaxis remains a common chief complaint in the ED. The management of epistaxis is highly variable, with a wide range of packing implements and hemostatic agents currently available. In one survey of otolaryngologists in England and Wales, over three-fourths of patients admitted to the hospital for epistaxis required nasal packing, with anterior packing employed in the vast majority of these cases (Kotecha 1996). While the use of nasal packing has likely decreased since this study’s publication, and is likely much lower when considering all patients presenting to the ED, anterior packing remains a common procedure for emergency physicians.
The role of prophylactic systemic antibiotics when anterior nasal packing is employed remains highly controversial. The authors of the American College of Emergency Physicians' "Focus On: Treatment of Epistaxis" note that while direct evidence is lacking, "most sources recommend TMP/SMX, cephalexin, or amoxicillin/clavulanic acid to prevent sinusitis and toxic shock syndrome [TSS]." While the prevention of toxic shock syndrome is often cited as a reason for prescribing antibiotics in these cases, this serious complication is exceedingly rare. The incidence of TSS following nasal surgery is approximately 16.5 in 100,000, or roughly 1 in 6000 cases. While the exact incidence of TSS following anterior nasal packing for epistaxis is unknown, no cases have been reported in the literature. Of 61 cases of TSS identified in the Minneapolis-St. Paul area between 2000 and 2006, none were attributed to an upper respiratory source (Devries 2011).
A survey of physicians in the United Kingdom conducted in 2005 revealed that 78% of interviewees believed that the use of prophylactic antibiotics with anterior nasal packing reduced the incidence of infection (Biswas 2006). There is, however, limited evidence regarding the effect of antibiotics on the infectious complications of packing. One large randomized trial evaluating the use of prophylactic antibiotics with nasal packing following septoplasty found no difference in post-operative pain, infectious symptoms, or the amount of purulent nasal discharge with or without prophylactic antibiotics (Ricci 2012). These results support the findings of a previous systematic review of post-operative nasal surgery patients (Georgiou 2008); however, their applicability to patients with anterior nasal packing for epistaxis is unclear. While differences in packing location (anterior vs. posterior), sterility of the environment (operative room vs. ED), and the nasal cavity itself (post-surgical vs. non-instrumented) may have some effect on the incidence of infectious outcomes, it seems reasonable to extrapolate these results to our patient population.
Unfortunately, no randomized controlled trials evaluating the effect of antibiotics on outcomes following epistaxis could be identified. What evidence does exist, however, suggests that antibiotics are unnecessary and potentially harmful. Anterior nasal packing and antibiotic administration have been found to have no effect on the microbiological flora of the nasal cavity following epistaxis (Biswas 2009). Two before-and-after studies evaluating protocol changes that moved away from routine prophylactic antibiotics also found that antibiotics had no effect on more clinically important patient outcomes. The first of these studies (Pepper 2012) enrolled 159 patients, of whom approximately half were treated with prophylactic amoxicillin-clavulanic acid or clarithromycin while the other half received no antibiotics. No infectious complications (sinusitis, otitis media, or TSS) were identified in either group. In the latter study (Biggs 2013), 38 patients were enrolled prior to the implementation of a protocol to reduce antibiotic use, while 19 patients were enrolled following implementation. The rate of antibiotic use was significantly reduced by the protocol, from 74% to 16%, and the authors found no difference in infectious symptoms between the groups at 6-week telephone follow-up.
Unfortunately, none of these studies assessed adverse reactions to the antibiotics administered. Rates of serious adverse reactions to antibiotics can be difficult to estimate. One report estimates the rate of anaphylaxis from antibiotic administration to be around 1 in 5000, somewhat higher than the 1 in 6000 rate of TSS with nasal packing following nasal surgery. Assuming a similar rate of TSS following anterior packing in epistaxis, and assuming that antibiotics completely eliminated its occurrence, the benefit would still be outweighed by the risk of serious harm. While this analysis does not account for other infectious complications - such as sinusitis and otitis media - it also does not account for other serious complications of antibiotic use - such as Stevens-Johnson syndrome and Clostridium difficile infection - or for less serious adverse reactions - such as rash, diarrhea, nausea, and vomiting. Given the low reported incidence of sinusitis and otitis media following nasal packing, it seems highly likely that the risks of antibiotic administration outweigh the benefits when all complications are considered. Unfortunately, the rarity of both infectious and drug-related complications means that large sample sizes would be required to definitively demonstrate the superiority of either approach, and such a study seems unlikely to occur in the near future.
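The back-of-the-envelope trade-off described above can be made explicit. Under the text's generous assumptions - that TSS risk with packing mirrors the post-surgical rate and that antibiotics prevent it entirely - prophylaxis still causes more anaphylaxis than the TSS it prevents:

```python
# Best-case benefit vs. serious harm in a hypothetical cohort of 10,000
# patients with anterior packing, under the assumptions stated in the text.

tss_risk = 1 / 6000           # assumed TSS risk without antibiotics
anaphylaxis_risk = 1 / 5000   # assumed anaphylaxis risk with antibiotics
n = 10_000                    # hypothetical cohort size

print(f"TSS cases prevented (best case): {n * tss_risk:.1f}")          # ~1.7
print(f"Anaphylaxis cases caused:        {n * anaphylaxis_risk:.1f}")  # 2.0
```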
The current literature on this topic is unfortunately lacking in both methodology and sample size, and it is difficult to draw firm conclusions. This conundrum is common, and the typical question remains: "What amount of evidence is required to change our practice?" Given that our "standard of care" is to administer prophylactic antibiotics when we place anterior packing in the ED, and that this is frequently the recommendation of our ENT consultants, some would argue that rigorous evidence is needed before we can safely advocate a change in practice. In the parlance of our legal system, we would need to prove "beyond a reasonable doubt" that the risks of antibiotic administration outweigh the benefits. I would instead argue that a civil (rather than criminal) burden of proof applies, and that we need merely prove our case "by a preponderance of the evidence." As is often the case, our current practice and our colleagues' usual recommendations are based on anecdote and dogma rather than on sound research and data. Given the very real potential for harm from unnecessary antibiotic administration, the current body of evidence simply does not support the routine administration of prophylactic antibiotics following anterior nasal packing for epistaxis.
Subcutaneous Insulin in the Treatment of DKA
May 07, 2014
Journal Club Podcast #13: April 2014
Dr. Chandra Aubin and I discuss subcutaneous fast-acting insulin as an alternative to insulin drips in the treatment of diabetic ketoacidosis...
One afternoon you are working in your emergency department (ED) and walk in to see a new patient. Mr. X is a 24-year-old with a history of type 1 diabetes who presents with a complaint of weakness. He reports that one month ago he lost his job and, with it, his insurance. He has been unable to afford his insulin or syringes and was trying to "stretch them out" by using only one injection per day. He ran out completely 3 days prior to arrival, and in the interim developed polyuria and polydipsia, followed by nausea, vomiting, and generalized weakness. He denies any infectious symptoms, abdominal pain, chest pain, or shortness of breath, though he does appear mildly tachypneic. He is also mildly tachycardic, but otherwise afebrile and hemodynamically stable. His exam is unremarkable.
His blood sugar is checked at the bedside and is 540 mg/dL, and his finger-stick ketones are 4.3 mmol/L. You start by giving him a liter of normal saline (NS) while you await his chemistry labs, but you’re pretty sure he’s in diabetic ketoacidosis (DKA). His labs come back as follows: Na 131, K 4.1, Cl 100, CO2 13, and his anion gap is 22. You realize that you will need to treat his DKA, but are also aware that all of your ICU beds are full, and that you cannot send a patient to the floor on an insulin drip. The patient is also begging you to keep the cost of his care to a minimum, since he has no insurance at the moment. Given the availability of fast-acting insulin analogs (lispro and aspart) you wonder if there is any place for subcutaneous (SC) fast-acting insulin, as an alternative to a continuous infusion of intravenous (IV) regular insulin, in the management of mild to moderate DKA. You decide to do a brief search of PubMed to see if this is even a reasonable question…
PICO Question:
Population: Patients (adult or pediatric) with mild to moderate DKA.
Intervention: Subcutaneous fast-acting insulin analog (aspart or lispro)
Comparison: Continuous infusion of intravenous regular insulin.
Outcome: Duration of therapy, ICU admission, ICU length of stay, hospital length of stay, hypoglycemia, recurrence of DKA.
Search Strategy:
You use the PubMed advanced search builder to create the following search strategy: (aspart OR lispro) AND ((diabetic ketoacidosis) OR DKA) (http://tinyurl.com/q4neutu). This identifies 38 articles, from which the following 4 most relevant articles are chosen.
Bottom Line:
DKA is a relatively common and dangerous complication of diabetes in both children and adults, with an estimated mortality of around 13%. In 2009, seven of every 1000 diabetics were admitted to the hospital for DKA. The primary treatment is hydration, electrolyte monitoring, and insulin therapy, traditionally accomplished with IV regular insulin. Both the American Diabetes Association (ADA) and the International Society for Pediatric and Adolescent Diabetes (ISPAD) recommend a continuous infusion of IV regular insulin as the standard of care in the management of DKA. These recommendations are based primarily on studies from the 1970s (Menzel 1970, Fisher 1977) suggesting that the delayed onset and longer half-life of SC and IM regular insulin make these routes inadequate for the management of DKA. However, those studies evaluated regular insulin and pre-dated the development of the fast-acting insulin analogs (aspart and lispro), which may be more efficacious when administered by these alternate routes. Insulin lispro, for example, has an onset of action of 10 to 20 minutes and reaches peak concentration within 30-90 minutes when administered by SC injection (Holleman 1997).
Management of DKA with continuous IV insulin is typically accomplished in an ICU or intermediate-care setting. As the population ages, the demand for ICU beds is increasing, and availability is often limited. ICU admission also drastically increases the cost of care. While patients in DKA are often critically ill, their care is generally algorithmic, and those without severe DKA may not require ICU-level care. Given that ICU admission is often dictated by the use of a continuous IV infusion of insulin, an alternative regimen of intermittent SC insulin might allow admission to general medical wards or "step-down" units.
The current body of literature comparing IV and SC insulin in DKA is comprised of small randomized trials (Umpierrez 2004, Umpierrez 2004, Della Manna 2005, Karoli 2011). The outcomes from these studies suggest that SC fast-acting insulin is both safe and effective in treating mild to moderate DKA. No differences were observed in the duration of therapy required to resolve hyperglycemia or DKA. The incidence of hypoglycemia was low in all of the studies and similar with either treatment, and there were no episodes of recurrent DKA or death in any of the studies. In adults, an initial SC injection of 0.3 units/kg of insulin aspart or lispro can be given, followed by SC injections either hourly (0.1 units/kg) or every 2 hours (0.2 units/kg). In pediatric patients, it is reasonable to forego the initial bolus and instead administer 0.15 units/kg every 2 hours.
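For illustration, the weight-based regimens described above can be expressed as a simple calculator; this is a sketch of the trial regimens only, not a clinical protocol, and institutional DKA pathways should govern actual dosing:

```python
# Weight-based SC fast-acting insulin regimens as described in the trials
# above. Illustrative sketch only - not clinical guidance.

def sc_insulin_regimen(weight_kg: float, pediatric: bool = False):
    """Return (initial bolus, repeat dose, dosing interval in hours)."""
    if pediatric:
        return 0.0, 0.15 * weight_kg, 2           # no bolus; 0.15 u/kg q2h
    # Adults: 0.3 u/kg bolus, then 0.1 u/kg q1h or 0.2 u/kg q2h
    return 0.3 * weight_kg, 0.2 * weight_kg, 2

bolus, repeat, interval = sc_insulin_regimen(70)
print(f"70-kg adult: {bolus:.0f} units SC, then {repeat:.0f} units "
      f"every {interval} h")
# 70-kg adult: 21 units SC, then 14 units every 2 h
```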
While these regimens seem to be safe and effective, their benefit over traditional IV strategies is less clear. The primary potential benefit is eliminating the need for ICU admission, thereby reducing cost. Only one of these studies assessed the cost of care, and it indicated a 39% reduction with subcutaneous insulin (Umpierrez 2004). However, it is quite possible that this cost difference was due to added ICU charges rather than a true difference in the intensity of care required (Haas 2004). Patients receiving SC insulin had blood glucose levels checked every hour, while levels in those receiving IV insulin were only checked every 2 hours. Given the frequency of insulin administration required with a SC strategy - every 1 to 2 hours - the amount of nursing time required may actually increase with SC insulin. If the use of SC insulin does not reduce ICU admissions, then there is no benefit, and IV regular insulin remains a logical treatment option. In a retrospective chart review of DKA patients treated with SC aspart at Rush University Medical Center in Chicago, the mean ICU length of stay was still 43.36 hours, indicating that such patients were still being admitted to the ICU for initial management.
Perhaps, then, we should alter our question and instead ask whether patients with uncomplicated DKA should be admitted to the ICU at all, regardless of the route of insulin administration chosen. A review of 67 cases of DKA admitted to the ICU at Truman Medical Center in Kansas City, MO found that over a third of patients did not warrant ICU treatment based on existing admission criteria. These data suggest that increased use of "step-down" or intermediate-care units could reduce the need for ICU admission in uncomplicated DKA, whether IV or SC insulin strategies are employed.
Biomarkers (Procalcitonin) to Diagnose Sepsis
Mar 31, 2014
Journal Club Podcast #12: March 2014
I'm joined by Drs. Larry Lewis and Matt Dettmer to talk about using biomarkers, specifically procalcitonin, to diagnose sepsis...
You are rotating at a hospital just outside of Honolulu, HI. A patient presents from dialysis with nausea and vomiting. The patient has been feeling unwell for several days, presented to dialysis, and had several episodes of non-bloody emesis. The dialysis session was completed, and the patient was then transferred to the ED for further evaluation. The patient denies abdominal pain and respiratory symptoms. The patient is afebrile, tachycardic with a heart rate in the 120s, hypotensive with a blood pressure of 95/65, and mildly tachypneic with a respiratory rate of 22, saturating 98% on room air. On exam, the patient has dry mucous membranes, clear lungs, and a soft abdomen. The patient has an in-dwelling right IJ dialysis catheter; the site is appropriately dressed, appears clean and dry, and has no surrounding erythema.
A basic work-up is started: the patient has a clear chest X-ray, a mild leukocytosis of 13,000, and an EKG showing sinus tachycardia. Creatinine is elevated to 5; other electrolytes are within normal limits. Point-of-care lactate is mildly elevated at 2.5. After a small bolus of IV fluids, the patient remains tachycardic with a soft blood pressure. You consider the possibility that the patient may be septic and consider the initiation of broad-spectrum antibiotics.
Since your medical license hasn’t been processed, you cannot actively participate in the care of this patient. Nevertheless, your medical curiosity is piqued and you leap to a computer and begin a literature search to see if there is a role for biomarkers to help determine whether this patient would benefit from antibiotics. You develop the following PICO and begin your journey down the rabbit hole.
PICO Question:
Population: Adult patients presenting to the Emergency Department meeting SIRS criteria without clear source of infection.
Intervention: Use of procalcitonin to identify patients with bacterial sepsis
Comparison: Standard management
Outcome: Diagnosis of sepsis due to bacterial infection
Search Strategy:
You access PubMed and search the Clinical Queries tool using the terms "procalcitonin AND sepsis" with the narrow filter applied (http://tinyurl.com/pya5gs7). This results in 261 articles, from which you choose the following four, including a recent meta-analysis on the topic.
Bottom Line:
Sepsis is defined as the systemic inflammatory response syndrome (SIRS) occurring in the presence of an infectious etiology, and is often thought of as the result of a severe infection (Stearns-Kurosawa 2011). In reality, sepsis represents a clinical spectrum of disease, ranging from a relatively mild immune response to devastating global systemic inflammation leading to vasodilation, hypoperfusion, and multi-organ system failure (Odeh 1996).
While patients on the milder end of the spectrum frequently fare well, mortality in those with severe sepsis and septic shock approached 50% in the late 1990s (Rivers 2001). Even with the advent of early goal-directed therapy, and an increased recognition of the severity of these illnesses and the need for timely intervention, mortality remained between 30 and 40% in the following decade (Ferrer 2008, Levy 2010, Castellanos-Ortega 2010). More recent evidence from the newly published ProCESS trial suggests that mortality is now closer to 20%, though this study was performed in academic centers in the US, and some are concerned that a Hawthorne effect may have contributed to this low mortality.
Regardless of the exact cause of this decreased mortality, it seems likely that earlier recognition of disease and initiation of aggressive management has led to improved outcomes in these sicker patients. For example, research has demonstrated decreased mortality associated with earlier administration of antibiotics (Gaieski 2010) and larger volumes of fluid administered in the initial 3 hours (Lee 2012). Early diagnosis of sepsis, and differentiation from SIRS due to a non-infectious etiology, is therefore critical in the management of these patients.
Early differentiation would benefit not only those patients with severe sepsis and septic shock, but could also improve outcomes in those ultimately diagnosed with a non-infectious etiology. In one study of severe non-infectious SIRS in the ICU, mortality was found to be similar to that in severe sepsis (Dulhunty 2008). While a large proportion of cases identified were either post-operative or trauma-related, and may be more easily differentiated from severe sepsis, many were due to underlying cardiovascular, pulmonary, GI, and neurologic conditions. Diagnosing such patients with severe sepsis, and treating them as such initially, has the potential to delay diagnosis and definitive treatment of the true underlying etiology. Additionally, the administration of broad-spectrum antibiotics in such cases would be of no therapeutic benefit, and may in fact be harmful. Data from the CDC indicate that the use of antibiotics confers a relative risk of developing a C. difficile infection of 3.1 (95% CI 2.5 to 3.8), and suggest that a 30% decrease in the use of broad-spectrum antibiotics would reduce C. difficile infection rates by 26%.
Unfortunately, the differentiation of sepsis from non-infectious SIRS can be difficult, particularly in the early phase of the disease. As a result, several biomarkers, most notably procalcitonin, have been proposed as a means of making this differentiation. Procalcitonin is the peptide precursor of calcitonin. Its production is stimulated by endotoxins and cytokines, and is inhibited by interferon-gamma, a cytokine produced in response to viral infections. As a result, procalcitonin has been proposed as a means of differentiating bacterial infections from both viral infections and non-infectious pro-inflammatory conditions (Delevaux 2003).
We identified 4 articles looking at the use of procalcitonin specifically to distinguish sepsis from SIRS of non-infectious etiology. Three of the studies involved primary research conducted on Emergency Department (ED) patients (Tsalik 2012, Jaimes 2013, Loonen 2014), while the fourth was a systematic review and meta-analysis (Wacker 2013). Sensitivities were poor in the primary studies, ranging from 55% to 68%; specificity fared only slightly better, as low as 64% in one study, and as high as 97% when a higher cut-off was used (sacrificing sensitivity, which decreased to 18%). The sensitivity and specificity in the meta-analysis were 77% and 79%. The resulting likelihood ratios, reported in Table 1, indicate that the probability of sepsis changes very little with procalcitonin results. Some have suggested that while procalcitonin alone cannot differentiate noninfectious SIRS from sepsis, it can be used in conjunction with additional clinical information to aid in diagnosis and management. Unfortunately, such a role has not been well-defined, and no clinical decision rules involving procalcitonin have been developed to assist in sepsis diagnosis. If procalcitonin is to become a relevant aspect of sepsis care, additional research will need to identify a particular clinical role with an improvement in patient-oriented outcomes.
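For reference, the likelihood ratios in Table 1 follow directly from the reported sensitivities and specificities. A minimal sketch (values drawn from the Wacker meta-analysis estimates above; the helper function is ours, for illustration only):

```python
# Likelihood ratios follow directly from sensitivity and specificity:
#   LR+ = sens / (1 - spec),  LR- = (1 - sens) / spec
# Illustrative values only, drawn from the estimates reported above.

def likelihood_ratios(sens: float, spec: float) -> tuple[float, float]:
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    return lr_pos, lr_neg

# Meta-analysis estimates (Wacker 2013): sensitivity 77%, specificity 79%
print(likelihood_ratios(0.77, 0.79))  # ~ (3.7, 0.29), matching Table 1
```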
The meta-analysis we reviewed unfortunately suffered from a large degree of heterogeneity. Clinically, the included studies were conducted in a variety of settings, including pediatric, medical, and surgical ICUs, hospital wards, and the ED. Additionally, the studies evaluated procalcitonin using a wide range of cut-offs, 0.1 to 15.75 ng/mL, making interpretation of the reported test characteristics difficult.
Table 1. Test characteristics of procalcitonin for differentiating sepsis from non-infectious SIRS:

| Study  | Cut-off   | AUC (95% CI)     | LR+  | LR-  |
|--------|-----------|------------------|------|------|
| Jaimes | 0.3 ng/mL | 0.69 (0.65-0.72) | 1.77 | 0.57 |
| Tsalik | 0.1 ng/mL | 0.72 (N/A)       | 1.9  | 0.51 |
| Tsalik | 0.5 ng/mL |                  | 3.2  | 0.68 |
| Tsalik | 3 ng/mL   |                  | 6.3  | 0.84 |
| Loonen | 2 ng/mL   | 0.81 (0.70-0.91) | 3.9  | 0.52 |
| Wacker | various   | 0.85 (0.81-0.88) | 3.7  | 0.29 |
The result was a high degree of reported statistical heterogeneity, with an I² of nearly 78% for both sensitivity and specificity, and 96% for the bivariate model. Given this high degree of heterogeneity, it would have made more sense for the authors to report the data of the individual studies without performing a meta-analysis.
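For context, I² expresses the proportion of the total variation across studies that is due to heterogeneity rather than chance, using the standard Higgins formula; a minimal sketch with hypothetical inputs (the Q value and study count below are made up for illustration, not taken from Wacker et al):

```python
# I-squared (Higgins): proportion of total variation across studies
# attributable to heterogeneity rather than chance.
#   I^2 = max(0, (Q - df) / Q) * 100%
# where Q is Cochran's Q and df = (number of studies - 1).
# Hypothetical inputs for illustration only.

def i_squared(q: float, n_studies: int) -> float:
    df = n_studies - 1
    return max(0.0, (q - df) / q) * 100

print(i_squared(q=131.0, n_studies=30))  # ~77.9% -- heterogeneity dominates
```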
Unfortunately, sepsis research in general is handicapped by the lack of a gold standard. Reliance on source testing and culture results leads to high false negative rates, as blood cultures are positive in only around one-fourth of septic patients (Bates 1997). Diagnostic research on sepsis therefore relies on expert opinion and consensus to differentiate septic from non-septic patients. While there are methods to attempt to correct for the absence of a true gold standard (Reitsma 2009, Rutjes 2007), including the use of panel consensus as employed in our studies, these methods themselves lack methodological validation. Despite the moderate to high rates of agreement noted in the studies by Jaimes and Tsalik (65% and 82%, respectively), this reference standard is far from perfect. Evaluation of the diagnostic accuracy of a test depends on how well the results of the test in question agree with outcomes based on the gold standard. When the reference standard is imperfect, the resulting test characteristics (sensitivity, specificity, likelihood ratios) are less reliable.
Lung-Protective Ventilation in the ED
Mar 04, 2014
Journal Club Podcast #11: February 2014
ICU gurus Nick Mohr and Brian Fuller join me to talk about preventing acute lung injury and ARDS in the ED...
It is 0200 and you are working in the trauma critical care (TCC) area of your beloved emergency department. During the chaotic shift, three patients require endotracheal intubation. The first patient is a 55-year-old man with a history of lymphoma in remission, status post splenectomy. He arrived hypoxic with an SpO2 of 85% and tachypneic with a respiratory rate of 30. It is obvious what is needed, and he is immediately intubated successfully without complication in room 5 of the TCC. His post-intubation chest radiograph shows bilateral alveolar opacities consistent with multifocal pneumonia and acute respiratory distress syndrome (ARDS).
You walk out of room 5 and see an obtunded 22-year-old female vomiting in room 6. She is intoxicated and smells of alcohol, and a quick biopsy of her belongings shows several prescription bottles. You think this is likely a polysubstance overdose, and because of her mental status and lack of a gag reflex, you intubate for airway protection. Vomitus is around her vocal cords during direct laryngoscopy, and she has significant secretions noted by the respiratory therapist. Her post-intubation chest radiograph shows a right lower lobe infiltrate, most consistent with aspiration.
As you exit room 6, you are called to room 4 because of the arrival of a 45-year-old man who was trying to light his cigarette while bent over a brewing batch of methamphetamines. Shockingly, a small eruption occurred. Upon arrival, he is alert (also intoxicated) and you note that he has minimal burns, all located on his face, with singed hairs and soot in his mouth. You perform a quick fiberoptic examination of his upper airway, and note edematous and erythematous tissue in his posterior oropharynx, extending distally to his vocal cords. Given this inhalation injury and potential for upper airway obstruction, you intubate him, which is uneventful. His post-intubation chest radiograph is remarkable only for some interstitial edema.
You are happy that you have successfully performed perhaps the most defining skill of a well-trained emergency physician: securing an emergent airway. Your respiratory therapist is asking for ventilator orders, and she has set each patient at what appear to be pretty standard settings, with a tidal volume of 500 mL for each patient. Noting that the median ICU wait time at your institution is around 6 hours, you wonder if this catch-all mechanical ventilation prescription is the most appropriate strategy. You are now considering the evidence regarding tidal volume for patients with ARDS, but especially for the two patients at risk for ARDS.
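Part of the problem with a catch-all 500 mL is that lung-protective tidal volumes are dosed on predicted body weight (PBW), not actual weight. A minimal sketch using the ARDSNet PBW formulas (the 170 cm example patient is hypothetical):

```python
# Lung-protective tidal volume is dosed on predicted body weight (PBW),
# not actual weight. ARDSNet PBW formulas (height in centimeters):
#   male:   PBW = 50.0 + 0.91 * (height_cm - 152.4)
#   female: PBW = 45.5 + 0.91 * (height_cm - 152.4)

def tidal_volume_ml(height_cm: float, male: bool, ml_per_kg: float = 6.0) -> int:
    base = 50.0 if male else 45.5
    pbw = base + 0.91 * (height_cm - 152.4)
    return round(pbw * ml_per_kg)

# A hypothetical 170 cm man: PBW ~66 kg, so ~396 mL at 6 mL/kg --
# well below the one-size-fits-all 500 mL the three patients received.
print(tidal_volume_ml(170, male=True))  # ~396 mL
```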
PICO Question:
Population: Mechanically ventilated adult patients at risk for ARDS, but without the syndrome.
Intervention: Low tidal volume (≈6 mL/kg PBW).
Comparison: Conventional tidal volume.
Outcome: Incidence of ARDS early after ICU admission from the ED (i.e. within ≈5 days).
Search Strategy:
MEDLINE, EMBASE, CINAHL, and the Cochrane Library were searched using:
Acute Lung Injury
"Acute Lung Injury"[Mesh] OR "Acute Lung Injury" OR “Acute Lung Injuries” OR "Ventilator-Induced Lung Injury"[Mesh] OR "Ventilator-Induced Lung Injury” OR “Ventilator-Induced Lung Injuries” OR “Ventilator Associated Pneumonia” OR “ventilation induced lung injury” OR “VILI”
AND
Prevention
Prevent* OR prevention OR prophylax* OR prophylac* OR chemoprevent* OR thwart* OR "ward off" OR "ward-off" OR pre-emptive* OR preemptive* OR chemoprophyla*
AND
Outcome
(outcome* OR ((treatment* OR protocol*) AND (respond* OR response*)) OR failure* OR mortality OR fatal* OR death OR dead OR deaths OR "passed away" OR demise* OR Recurren* OR progression OR progressed OR relaps* OR growth OR grew OR growing OR regression OR survival OR cure OR cures OR "quality of life" OR qol OR morbidit* OR adverse OR "side effect" OR "side effects" OR event OR events OR nausea OR nauseous OR vomit* OR emesis OR comfort* OR pain OR painful OR painfree OR pain-free OR stress OR analges* OR "Outcome Assessment Health Care "[Mesh] OR "Mortality"[Mesh] OR "mortality "[Subheading] OR "Survival"[Mesh] OR "Survival Analysis"[Mesh] OR "Quality of Life"[Mesh] OR "Pain Measurement"[Mesh] OR "Health"[Mesh] OR "Health Status Indicators"[Mesh] OR "Health Status"[Mesh] )
AND
Adults
"Young Adult"[Mesh] OR “Young adults” OR “Young Adult” OR "Adult"[Mesh] OR “adults” OR “adult” OR "Middle Aged"[Mesh] OR “Middle age” OR "Aged"[Mesh] OR “Aged” OR Elder* OR "Aged, 80 and over"[Mesh] OR “Oldest Old” OR Nonagenarian* OR Octogenarian* OR Centenarian* OR "Frail Elderly"[Mesh] OR “Frail Older Adults” OR “Frail Older Adult”
AND
PubMed NOT Animal Studies
NOT (("Animals"[Mesh]) NOT ("Animals"[Mesh] AND "Humans"[Mesh]))
This yielded a total of 1,704 potentially relevant articles, of which 1,652 were excluded based on title or abstract. This left 52 full-text articles, which were further reviewed to assess appropriateness and relevance to the journal club topic.
Bottom Line:
With an incidence of close to 200,000 patients annually (Rubenfeld 2005), a mortality rate of about 40% (Bersten 2002, Rubenfeld 2007), and prolonged physical and mental sequelae in survivors, it is clear that ARDS is one of the most important problems in critical care. No treatment yet exists for the underlying pathophysiology of the syndrome, and low tidal volume ventilation remains the only intervention with a consistent mortality benefit across the spectrum of syndrome severity. Given these facts, prevention of ARDS has become increasingly important, and the focus of increasing study and funding priorities.
A major question, and topic of controversy, is: “Should lower tidal volume be used prophylactically to prevent ARDS in at-risk patients?” Tidal volume is a major risk factor for the development of ventilator-associated lung injury (VALI). This has been established both in animal models of lung injury and in human data from patients with ARDS.
ARDS is relatively common in the ED. Observational data and a small randomized controlled trial (Determan 2010) show that lower tidal volume may reduce progression to ARDS. This was further studied in two recently published systematic reviews (Serpa Neto 2012, Fuller 2013), both of which concluded that the majority of data points to a therapeutic benefit of low tidal volume ventilation in preventing ARDS.
Another major question is “Can the ED play a role in the treatment and prevention of ARDS?” Observational data show that a significant minority of patients already have ARDS while in the ED (Fuller 2013). Clinical data also show that progression to ARDS occurs early after ICU admission, with a median onset of 30 hours (Shari 2011). This temporal relationship suggests a potential causal link to treatment provided (or not provided) during the ED and early ICU course. Despite this, not a single trial of mechanical ventilation has ever been conducted in the ED. Further highlighting the importance of ARDS prevention, patients progressing to ARDS experience increased morbidity and mortality.
With respect to mechanical ventilation in the ED, data are limited. What does exist tells us that the tidal volumes commonly delivered are highly variable, and that potentially injurious ventilation practices persist. This provides a rationale for quality improvement, knowledge translation, and randomized trials in these ED patients.
The HINTS Exam in Vertigo
Feb 02, 2014
Journal Club Podcast #10: January 2014
Dr. David Newman-Toker, the authority on the HINTS exam, joins me to talk about oculomotor testing in acute vertigo...
Central Vertigo Video Links
A HINTS exam consistent with vertigo of central origin should have at least one of the following: a normal head impulse test (without a corrective saccade), nystagmus that changes direction on eccentric gaze, or a positive test of skew deviation (vertical ocular misalignment).
Normal head impulse test
Direction-changing nystagmus
Positive test of skew - Courtesy of Dr. Peter Johns
Peripheral Vertigo Video Links
A HINTS exam consistent with peripheral vertigo should have all of the following: an abnormal head impulse test (with a corrective saccade), nystagmus that does not change direction on eccentric gaze, and a negative test of skew deviation.
Abnormal head impulse test - Courtesy of Dr. Peter Johns
Unidirectional nystagmus - Courtesy of Dr. Kevin Kerber
Negative test of skew
While moonlighting in a small community hospital one evening, you are presented with a 58-year-old gentleman complaining of vertigo. He was at home eating dinner 5 hours prior to arrival when he felt the room begin to suddenly, and violently, spin around him. He notes, “I haven’t felt like this since college!” He reports becoming nauseated and vomiting several times, then getting up and “staggering” to his bed, where he lay down and tried to “wait it out.” After several hours of constant vertigo, he attempted to get up to go to the bathroom and fell to the floor. He managed to grab his cellphone and called 911. He reports the vertigo has been constant, is worse with any change in head position, and is associated with nausea and imbalance. He denies recent URI symptoms, hearing changes, focal weakness or numbness, or speech changes. His past medical history includes hypertension and diabetes, controlled with amlodipine, metformin, and glyburide. On exam, he has horizontal beating nystagmus. Cerebellar exam - including finger to nose, heel to shin, and rapid alternating movements - is otherwise normal. He has an abnormal Romberg test and is unable to stand or ambulate unassisted. The remainder of his neurologic exam is normal. Head CT, ECG, and labs are all normal.
Your differential includes two main concerns: either the patient has vestibular neuritis and should be treated symptomatically and discharged, or he has suffered a cerebellar stroke and requires transfer to a hospital with neurology consultation and MRI available. When the patient does not improve after receiving oral meclizine and IV diazepam, you bite the bullet and transfer him to Barnes-Jewish for further evaluation by the stroke team. On your way home the next morning, you begin wondering if there are any aspects of the physical exam that can differentiate between peripheral and central causes of vertigo. A quick search of the literature identifies something referred to as the “HINTS” exam, which involves oculomotor testing. Specifically, this test includes evaluation of horizontal Head Impulse testing, the direction of Nystagmus, and a Test of ocular Skew deviation. You begin delving deeper to determine if this is something you should be using in your practice…
PICO Question:
Population: Adults with new-onset, acute vertigo with otherwise non-focal neurologic exam
Outcome: Diagnostic accuracy, morbidity or mortality related to misdiagnosis
Search Strategy:
An advanced PubMed search was conducted using the terms "(HINTS OR oculomotor OR vestibuloocular) AND (vertigo or dizziness)," limited to humans and the English language, resulting in 142 citations (http://tinyurl.com/mktaevm). Original studies that reported sufficient data to construct 2X2 contingency tables were chosen for analysis. The bibliographies of relevant articles were searched for additional references. Three articles that specifically addressed the diagnostic accuracy of the HINTS exam were identified. An additional article was selected that assessed the 3 components of HINTS as well as vertical smooth pursuit, but allowed for calculation of the accuracy of the HINTS exam alone.
Bottom Line:
Dizziness remains a common chief complaint in US emergency departments, leading to approximately 4 million visits every year (Saber Tehrani 2013). The emergency physician’s first duty in such cases is to distinguish benign peripheral causes of vertigo from more serious, potentially life-threatening, central causes. Making such a determination can be difficult: focal neurologic signs are absent in as many as 20% of cases of posterior circulation stroke (Tarnutzer 2011); computed tomography (CT) is frequently normal early in the course of posterior circulation stroke (Edlow 2008); and magnetic resonance imaging (MRI), often considered the reference standard for stroke, is associated with a significant number of false negatives when the posterior circulation is involved (Oppenheim 2000, Morita 2011).
Emergency physicians have identified the importance of a clinical decision rule to help differentiate central from peripheral etiologies of vertigo (Eagles 2008). The HINTS exam has been proposed as a means of making such a differentiation. This test involves 3 components:
1) Horizontal head impulse testing involves rapid head rotation by the examiner with the subject’s vision fixed on a nearby object (often the examiner’s nose). In cases of peripheral vertigo, a corrective saccade should be observed, and is considered a positive test. There is typically no corrective saccade in cases of central vertigo.
2) Evaluation of nystagmus will typically yield a fast phase which is unidirectional in peripheral vertigo, and beats away from the affected side. In central vertigo, the direction of the fast phase may change on eccentric gaze.
3) Alternate eye cover testing in patients with peripheral vertigo should result in no skew deviation or ocular tilt. Ocular misalignment and skew deviation (with or without ocular tilt) is frequently seen in patients with posterior fossa abnormalities (i.e. brainstem strokes).
While its individual components do not reliably differentiate central from peripheral causes of vertigo, an exam consisting of all three elements has been proposed to do so. In theory, if any of the components indicates a central pathology, then the exam is considered positive for a central etiology. All three components must be consistent with a peripheral etiology for the exam to be considered negative.
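To make that interpretation logic explicit, here is a minimal sketch of the "all three benign, or else central" rule described above (the function and argument names are ours, purely for illustration, not a validated clinical tool):

```python
# "Benign" HINTS pattern (all three must be present to call it peripheral):
#   - abnormal head impulse test (corrective saccade present)
#   - nystagmus that does NOT change direction on eccentric gaze
#   - no skew deviation
# Any single central-pattern finding makes the whole exam "central".

def hints_suggests_central(corrective_saccade: bool,
                           direction_changing_nystagmus: bool,
                           skew_deviation: bool) -> bool:
    peripheral_pattern = (corrective_saccade
                          and not direction_changing_nystagmus
                          and not skew_deviation)
    return not peripheral_pattern

print(hints_suggests_central(True, False, False))  # False: all benign -> peripheral
print(hints_suggests_central(True, True, False))   # True: one central finding suffices
```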
The current literature supporting the HINTS exam consists of four articles, three of which included patients from a single, ongoing prospective cross-sectional diagnostic study of patients with acute vestibular syndrome (AVS). The first article (Kattah 2009) included 101 patients with acute vertigo, of whom 76 were diagnosed with a central lesion. The diagnostic test characteristics of the HINTS exam for central vertigo were as follows: the sensitivity was 100% (95% CI 95.2-100.0), specificity was 96% (95% CI 79.6-99.3), likelihood ratio positive (LR+) was 25 (95% CI 3.66 to 170.59), and LR negative (LR-) was 0.00 (95% CI 0.00 to 0.11). Interestingly, the HINTS exam outperformed the initial MRI with diffusion-weighted imaging, which had a sensitivity for stroke of only 88%.
Similar diagnostic properties were identified in the 2nd paper (Newman-Toker 2013), which compared the accuracy of the HINTS exam to that of the ABCD2 score in 190 patients from the cross-sectional cohort. The ABCD2 score is a clinical prediction rule for short-term stroke risk following a transient ischemic attack. While this comparison seems contrived and unfair, the HINTS test performed quite well, with a sensitivity of 96.8% (95% CI 92.4-99), a specificity of 98.5% (95% CI 92.8-99.9), a LR+ of 63.9 (95% CI 9.13-446.85), and a LR- of 0.03 (95% CI 0.01-0.09).
The 3rd study from this database (Newman-Toker 2013) used a small sample of 12 patients to evaluate the HINTS exam aided by a video-oculography device, which was used to record head and eye velocity measurements during head impulse testing. Physician interpretation of these readings was compared to an algorithmic interpretation, with 100% agreement. The aided HINTS exam demonstrated a high degree of accuracy in the diagnosis of central vertigo, with a sensitivity and specificity of 100% (95% CI 54.1-100.0%), LR+ of ∞, and LR- of 0.
A 4th article was identified in which oculomotor testing was performed by neurologists, following completion of 4 hours of training specific to exam techniques and interpretation. Twenty-four patients admitted to the stroke unit were included in the study, of whom 10 were diagnosed with central vertigo. The sensitivity of the HINTS exam for stroke was 100% (95% CI 69.0 to 100.0), specificity was 85.7% (95% CI 57.2-97.8), LR+ was 7.0 (95% CI 1.9 to 25.3), and LR- was 0.
Several concerns were raised with regards to the current evidence. First, in these studies the HINTS exam was performed by specialists: neuro-ophthalmologists in two studies, neuro-otologists in one study, and neurologists with four hours of exam-specific training in the 4th study. This calls into question the external validity of these studies, as the accuracy and reliability of HINTS testing in the hands of emergency physicians has not been evaluated. However, with some degree of training, it is reasonable to expect emergency physicians to be able to perform the HINTS exam as proficiently as our specialist colleagues. When ED ultrasound was first being introduced, one of the primary concerns was that "only trained radiologists" could perform ultrasound, and hence that it was out of the jurisdiction of the ED. Over the years, we have not only proven capable of using ultrasound effectively in emergency medicine, but ultrasound is now a requirement of emergency medicine residency training. The question will be whether the amount of training necessary to become proficient with the HINTS exam will be worth the effort.
A second concern raised was that the patient populations in these studies were at moderate to high risk of central vertigo, with prevalence ranging from 42 to 75%. While the prevalence was high in all of these studies, these were still fairly heterogeneous groups of patients with variable risk (age ranges of 18-92, 26-92, 42-83, and 30-73) and hence did include some patients we would likely consider low risk. Some of the patients with stroke as the cause of symptoms were young (15 patients < 50 years of age in the study by Kattah et al). I would argue that some of these patients with central lesions would be treated as peripheral vertigo and discharged without advanced imaging at most institutions, and an abnormal bedside test would potentially lead to admission and further testing, and reduce the rate of missed stroke. In patients with a low probability of disease, an abnormal HINTS exam may increase the pre-test probability of disease above the test threshold for MRI or admission. This is especially true at institutions without MRI or neurologic consultation available, where transfer to another hospital for admission and further work-up would be required to assess for a central etiology. In some moderate-risk patients, a negative HINTS exam may reduce the probability of a central etiology below the test threshold, and obviate the need for further work-up. For example, using the upper limit of the 95% CI for the negative LR from the largest study (Newman-Toker 2013) of 0.09, a patient with a pre-test probability of 25% for a central etiology who has a negative HINTS exam would have a post-test probability of 2.9%, and hence the decision may be made to not proceed with further work-up.
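That post-test probability comes from converting the pre-test probability to odds, multiplying by the likelihood ratio, and converting back. A minimal sketch reproducing the arithmetic above (the function name is ours, purely for illustration):

```python
# Bayes via odds: post-test odds = pre-test odds * likelihood ratio.
# Reproduces the worked example above (pre-test 25%, LR- upper bound 0.09).

def post_test_probability(pre_test_prob: float, lr: float) -> float:
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

print(post_test_probability(0.25, 0.09))  # ~0.029, i.e. ~2.9%
```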
Further testing of the HINTS exam will need to identify a more concrete role for this test. The accuracy and reliability of the test in the hands of trained emergency physicians will need to be assessed, as will the impact of the test on both cost and patient care. If use of the test does not lead to a reduction in unnecessary imaging, a reduction in the rates of missed posterior circulation infarction, or both, then it will not be worth the effort to train physicians in its performance. The role of the video-oculography device will also need to be further assessed in larger studies with more precise estimates of diagnostic accuracy to justify its cost.
Does My Patient Have Mesenteric Ischemia?
Jan 09, 2014
Journal Club Podcast #9: December 2013
Brian Hiestand joins me via Skype from Wake Forest to discuss his recently published meta-analysis on the diagnosis of acute mesenteric ischemia...
Pediatric Volume Resuscitation in the Developing World
Dec 04, 2013
Journal Club Podcast #8: November 2013
Chris Carpenter sits down with Indi "we named the dog Indiana" Trehan and SueLin Hilbert to talk about pediatric volume resuscitation in the developing world...
The infamous SueLin Hilbert has brought you to Ghana to help staff the emergency department and train local Ghanaian physicians in the art and science of emergency medicine. Aside from the local beer, you are loving the experience, and you are making a big positive impact on clinical care in this very underserved population. One night, you are covering the ED alone when a 4-year-old girl arrives in septic shock. She is febrile, hypotensive, tachycardic, barely responsive to painful stimulation, and has delayed capillary refill. You cannot confirm a specific diagnosis because it is the middle of the night, the malaria lab techs have gone home, and you are out of rapid malaria diagnostic tests. But it is the rainy season and you are seeing tons of malaria, so you assume the child most likely has malarial sepsis, or less likely Gram-negative bacterial sepsis.
Nevertheless, it is clear to you she is in septic shock. You manage to place an EJ line after the nurses fail to obtain peripheral access, and you aggressively resuscitate her with normal saline at a dose of 60 mL/kg over half an hour, along with broad-spectrum antibiotics and supplementary oxygen. You consider albumin and blood transfusions, but by the time you get back to this patient (since you have an ED full of other sick patients), you learn that her respiratory status worsened, she became more pale and cold, and she ultimately died. Her mother is shrieking and crying. You are at the edge of tears, since this is certainly not the first child that has died on your watch today.
After your shift, you go home, have a stiff drink, and ponder what you may or may not have done wrong. When you discuss it with SueLin in the morning, she makes you think about whether the choice and volume of fluids you administered were ideal. You get on PubMed to consider the role of fluid resuscitation in septic shock in the developing world and come across the following articles:
PICO Question:
Population: Children with septic shock due to severe infection in the developing world.
Intervention: Fluid resuscitation with alternate fluids or rates.
Comparison: Standard resuscitation with 20 mL/kg normal saline.
Outcome: Survival and time to recovery from shock.
Search Strategy:
PubMed was searched on October 11, 2013, using the strategy: (sepsis or malaria) AND (fluid resuscitation). This gave 1182 results. The “English” language filter was applied. This reduced the total to 1064 results. The “child” age filter was applied. This reduced the total to 169 results. The “10 years” publication date filter was applied. This reduced the total to 85 results (http://goo.gl/cdn1r1). The abstracts of these 85 articles were reviewed to assess for relevant studies conducted in the developing world.
Bottom Line:
Circulatory shock is a significant problem in pediatric emergency medicine and critical care. The leading culprit is hypovolemia, usually due to life-threatening infection. Consequently, current guidelines for the acute management of severe sepsis in pediatric (and adult) patients emphasize early, rapid, and substantial infusion of intravenous fluids. The optimal fluid choice, volume of fluid, and route of administration have been topics of debate for 175 years. Indeed, this topic was the focus of at least two recent systematic reviews (Akech 2010, Ford 2012). Nonetheless, this remains a contentious issue, as evidenced by numerous letters and commentaries (Ford 2011, Ribeiro 2011, Berend 2011, Joyner 2011, Kissoon 2011, Scott 2011, Myburgh 2011, Hilton 2012).
The highest-quality (least biased) evidence to date is the Fluid Expansion as Supportive Therapy in Critically Ill African Children (FEAST) trial, although thus far no guideline revisions reflect the findings of this study. Although the FEAST trial represents landmark research, global medicine experts at our Journal Club identified several potentially serious flaws with it. First, the trial closed prematurely (with pros and cons), although the decision appears justified on ethical grounds. The authors provided much more detail about this decision in a subsequent manuscript. Second, the investigators failed to provide or analyze the cause of death. One Journal Club attendee noted that a webinar conducted by several of the FEAST site investigators analyzing these findings identified “cardiac collapse” as the cause of death in most of these children. Identifying the cause of death will be important to guide subsequent management trials, as well as to fully understand the implications of the FEAST trial. Third, the authors did not use the World Health Organization criteria for shock, since the role of the physical exam in stratifying severity of illness in normotensive children is unproven. Whereas the WHO requires the presence of delayed capillary refill, weak pulse, and tachycardia to establish the diagnosis of “shock,” these investigators required only one of the three. Were the children in this FEAST trial truly septic shock patients? One editorial suggests that FEAST was “probably treating children with serious febrile illnesses due to the most common medical problems, namely pneumonia and malaria, but not hypovolaemic shock.”
What did the FEAST trial conclude? Routine IVF bolus therapy in clinically undifferentiated, severely ill, non-hypotensive febrile children with diminished perfusion increases 24-hour mortality, whether normal saline or albumin is used. The increased mortality occurs regardless of malaria status, coma, severe anemia, base deficit, or lactate level. Hypothetical mechanisms include non-blood-product fluid resuscitation in severe anemia, rapid reversal of the compensatory vasoconstrictor response, reperfusion injury, subclinical pulmonary compliance effects, sepsis-related myocardial dysfunction, or effects on intracranial pressure. Before extrapolating the FEAST findings to the developed world, future research would need to explore similar fluid resuscitation strategies in the context of readily available ICU care and mechanical ventilation.
The rest of the evidence that we evaluated was of lesser quality and more focused on specific disease processes (malaria) or fluid types (Dextran vs. hydroxyethyl starch vs. normal saline or lactated Ringer's). The PGY-1 manuscript demonstrated that in pediatric patients with severe malaria and moderate to severe acidosis in the developing world, IV albumin at 20-40 cc/kg plus standard antimicrobial and supportive therapy is superior to IV normal saline, with an adjusted Number Needed to Treat (NNT) of 3 to prevent one death based upon a baseline mortality rate of 11%. Albumin appears more effective in malaria patients presenting with coma.
The PGY-2 manuscript demonstrated that in children with shock in India, more aggressive fluid and dopamine resuscitation within the first 20 minutes of ED arrival does not decrease mortality or increase intubation rates. If clinical equipoise remains despite these findings, future researchers should evaluate settings with >1 ED clinician and more ready access to ventilators to more accurately assess the internal and external validity of this intervention.
The PGY-3 manuscript demonstrated that in severe (impaired consciousness or respiratory distress) pediatric malaria in the developing world, hydroxyethyl starch (HES) and Dextran both appear safe for acute volume expansion therapy, with no adverse outcomes observed among 80 patients. However, impressive trends were observed favoring HES (compared with Dextran) for reducing mortality and resolving acidosis at eight hours, with an NNT of 5.
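As a refresher, NNT is simply the reciprocal of the absolute risk reduction; a minimal sketch with purely illustrative event rates (the adjusted NNTs quoted above cannot be recomputed without the underlying trial data):

```python
# Number needed to treat (NNT) = 1 / absolute risk reduction (ARR),
# where ARR = control event rate - experimental event rate.
# Event rates below are purely illustrative, not taken from the manuscripts above.

import math

def nnt(control_event_rate: float, treatment_event_rate: float) -> int:
    arr = control_event_rate - treatment_event_rate
    return math.ceil(1 / arr)

# e.g. mortality 30% with control fluid vs. 10% with the alternative:
# ARR = 0.20, so NNT = 5
print(nnt(0.30, 0.10))  # 5
```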
The consensus of our Pediatric EM and Critical Care Medicine experts was that the lack of access to ventilator support and Intensive Care Unit management in the FEAST trial prohibits extrapolation of these findings to the developed world at places like St. Louis Children’s Hospital. As far as the experts were aware, no ongoing or future studies were planned to further evaluate the efficacy of aggressive fluid resuscitation in pediatric septic shock patients, and neither guidelines nor local protocols would change based upon the FEAST trial. Nonetheless, an expanding volume of research in the developed world using contemporary ICU management indicates that excessive fluid resuscitation contributes to preventable morbidity in settings including sepsis, ARDS, acute kidney injury, and major surgery. Several ongoing trials will assess the effectiveness of early goal-directed therapy (which includes aggressive fluid resuscitation) in septic shock management. However, Washington University School of Medicine global medicine experts Dr. Indi Trehan and Dr. SueLin Hilbert noted that the FEAST trial radically altered the approach to septic shock resuscitation in the developing world, in that undifferentiated rapid fluid resuscitation is no longer the standard of care.
One unintended consequence of this Pediatric EM-Pediatric Critical Care-Emergency Medicine Journal Club event was recognition of a potential niche in diagnostic research. Specifically, the reproducibility of bedside features of shock (and of the history and physical exam in most of Pediatrics, as well as of systematic reviews of diagnostic accuracy) is under-researched, largely unknown, and poorly reported. How accurate and reliable are pediatric clinicians’ physical exam skills for detecting “severe sepsis”, organomegaly, etc.? This represents a tremendous opportunity (i.e. an unfilled academic niche) for diagnostic researchers to contribute to the JAMA Rational Clinical Exam and Academic Emergency Medicine Evidence Based Diagnostics series.
Cardiac Cath Following Cardiac Arrest in Patients Without STEMI
Nov 11, 2013
Journal Club Podcast #7: October 2013
This month, the outspoken Chandra Aubin and I hold hands and talk about which patients should undergo urgent cardiac cath following cardiac arrest...
While on your EMS rotation, you ride along on a call for a 52-year-old gentleman in cardiac arrest. On arrival at the patient's home, bystander CPR is underway. You learn that 10 minutes prior to your arrival, the patient was running on his treadmill when he suddenly clutched his left shoulder. He turned off the treadmill, told his wife and daughter he didn't feel well, and collapsed to the floor. His 17-year-old daughter (a lifeguard) attempted to feel for a pulse while his wife called 911. The daughter was unable to palpate a pulse and began CPR.
The paramedics continue CPR while you attach the cardiac monitor. Seeing that the rhythm is ventricular fibrillation, you place the pads and defibrillate the patient with success on the first attempt. He now has a pulse, and his BP is 85/40 with a pulse of 115. After placing an IV and intubating the patient, you then transport him to the Barnes-Jewish ED. His ECG on arrival reveals sinus tachycardia with normal intervals and deep T-wave inversions in leads V1-3, with no ST elevation. His BP is now up to 110/60, and his pulse remains 115. The ED team begins placing an ICY line to induce mild therapeutic hypothermia, sends labs, and begins the process of admission to the ICU.
You feel fairly certain, based on the history provided by family, that the underlying cause of this patient's cardiac arrest was an acute myocardial infarction. Despite the lack of ST-elevation on the ECG, you wonder if the patient would benefit from early cardiac catheterization to assess for significant acute coronary artery occlusion. You head to the offices on the 8th floor, find a computer, and begin a search to see what the literature says...
PICO Question:
Population: Adult patients with out-of-hospital cardiac arrest who achieve return of spontaneous circulation without evidence of STEMI on the initial ECG (with ventricular arrhythmia vs. any rhythm).
Intervention: Early cardiac catheterization.
Comparison: Standard care, delayed cardiac catheterization, no cardiac catheterization.
Outcome: Findings of acute coronary artery occlusion, survival to hospital discharge, good neurologic recovery, quality of life.
Search Strategy:
PubMed was searched using the strategy "(cardiac arrest) AND (early OR immediate) AND ((cardiac catheterization) OR (coronary angiography) OR (coronary intervention))" (http://tinyurl.com/lke4so9), resulting in 602 articles, from which the following 4 articles were chosen.
Bottom Line:
A large meta-analysis of data on outcomes from out-of-hospital cardiac arrest (OHCA) revealed that less than one-third of patients with return of spontaneous circulation (ROSC) survive to hospital discharge (Sasson 2010). While this study pre-dates the use of interventions shown to improve survival with good neurologic outcome, such as therapeutic hypothermia (Holzer 2005, Kim 2012), the data does suggest the need for improvements in the care of post-arrest patients. The 2010 American Heart Association Guidelines for Post-Cardiac Arrest Care describe therapeutic hypothermia and “treatment of the underlying cause of cardiac arrest” as the primary initiatives to improve survival and neurologic outcome. Acute myocardial infarction has been demonstrated to be the likely cause of arrest in anywhere from one-third (Spaulding 2007, Anyfantakis 2009) to 61% (Chelly 2012) of patients admitted to the hospital. In patients with ST-elevation myocardial infarction (STEMI) on ECG following ROSC, the International Liaison Committee on Resuscitation (ILCOR) clearly recommends that reperfusion therapy be attempted. In patients without STEMI, however, ILCOR is less clear, recommending only that one “consider immediate coronary angiography in all post-cardiac arrest patients in whom ACS is suspected.”
One systematic review looked at studies of coronary angiography in OHCA (Larsen 2012). A meta-analysis of 10 studies in which coronary angiography was performed in select patients revealed an unadjusted odds ratio for survival of 2.78 (95% CI 1.89-4.10). While this finding suggests improved outcomes with coronary angiography, selection bias likely made a large contribution given the selective, observational nature of these studies. The authors also identified 5 studies that assessed the use of coronary angiography systematically in all survivors of OHCA without an obvious non-cardiac cause, and found that the prevalence of significant coronary artery disease was high, ranging from 59-71%, despite only 31-63% of included patients presenting with STEMI or presumed new LBBB. This suggests that a significant subset of patients without STEMI had significant coronary occlusion, and could potentially benefit from PCI. So while this systematic review suggests that there is a subset of patients in whom coronary occlusion is the underlying cause of the arrest but in whom STEMI is absent on the post-resuscitation ECG, it remains unclear which patients will benefit from attempts at coronary reperfusion.
In the Parisian healthcare system, all patients successfully resuscitated after OHCA are transported immediately for coronary angiography (Laurent 2002, Dumas 2010, Chelly 2012). Dumas identified 435 patients with ROSC following OHCA without an obvious non-cardiac etiology. Among patients with STEMI, 96% had at least one significant coronary stenosis on angiography; in patients without STEMI, 58% had at least one significant coronary stenosis. While this latter number would suggest that cardiac catheterization was warranted in the majority of such patients, it is important to note that only 26% of these patients underwent successful percutaneous coronary intervention (PCI). The authors’ primary outcome was survival in patients undergoing successful PCI compared to those with no or failed PCI; they found survival rates of 51% vs. 31%, respectively (p < 0.001), for a relative risk of 1.62. The authors’ conclusion, that “immediate PCI seems to offer survival benefit” (p. 206), fails to address one important concern: the difficulty in predicting successful PCI in non-STEMI patients prior to coronary angiography. A more accurate conclusion may be that if you are going to have a cardiac arrest, make sure you have a lesion amenable to PCI.
In an attempt to reduce the rates of unnecessary cardiac catheterization, another Parisian study attempted to identify ECG findings predictive of angiographically defined acute myocardial infarction (AMI) in OHCA patients (Sideris 2011). They found that the combination of ST-elevation, ST-depression, left bundle branch block (LBBB), or nonspecific QRS widening had a sensitivity of 100%, a specificity of 47%, a positive predictive value of 52%, and a negative predictive value of 100%. Of 46 patients without ST-elevation who were positive by ECG criteria, only 7 (15%) had angiographically defined AMI; the other 39 underwent an unnecessary cardiac catheterization. While the authors conclude that use of this rule would have resulted in 30% of the cohort avoiding unnecessary cardiac catheterization, use of such a rule in most US institutions would instead lead to a significant increase in cardiac catheterization rates following OHCA. The benefit of such an increase would need to be verified prior to its implementation in light of the increased cost and risks of the procedure.
Only one article was identified that looked specifically at cardiac catheterization in patients without STEMI (Hollenbeck 2013). In this retrospective, observational study of comatose patients resuscitated from OHCA due to ventricular tachycardia (VT) or ventricular fibrillation (VF), patients were grouped by whether they underwent early cardiac catheterization (CC) (< 24 hours after ROSC) or underwent late or no CC during hospitalization. Overall survival to hospital discharge was higher in the early CC group compared to the control group (65.6% vs. 48.6%, p = 0.017), with an adjusted odds ratio for death of 0.35 (95% CI 0.18-0.70). Surprisingly, among those in the early CC group, successful PCI itself was not associated with improvement in survival rates (60% for those with successful PCI vs. 68.3% for those without successful PCI, p = 0.386). This suggests that factors other than catheterization itself led to the improved outcomes seen in the early CC group. It is possible that the increased use of other interventions, such as anti-thrombin and antiplatelet agents, could have led to improved outcomes in the early CC group. More likely, given the retrospective nature of the study, it seems plausible that selection bias played a significant role: patients suspected of having a better prognosis may have been referred for CC, whereas those in whom aggressive care was felt to be futile would be treated more conservatively. Additionally, certain important prognostic indicators were not addressed in this study. The baseline characteristics provided were primarily related to the arrest itself (witnessed arrest, bystander CPR, time to ROSC) and did not address prior history of coronary artery disease or the presence of pre-existing comorbidities, such as cancer, that could lead to the implementation of less aggressive care.
While it seems likely that coronary occlusion and acute MI are responsible for a significant proportion of cases of ROSC following OHCA, even in the absence of ST-elevation, the existing literature provides little direction as to which patients would benefit from cardiac catheterization. The data suggest that patients with conduction defects or ECG changes consistent with ischemia are more likely to have significant coronary obstruction; unfortunately, there is no evidence that performing routine catheterization in patients with such ECG findings improves overall outcomes. The data also do not currently support routine catheterization in patients with ventricular fibrillation or ventricular tachycardia, given the methodological flaws in the Hollenbeck paper. Consideration should still be given to selective catheterization in patients with a history concerning for acute MI preceding arrest, especially in the presence of ischemic ECG changes. Further studies will need to prospectively evaluate the use of angiography in a pre-defined population of patients without STEMI to assess its efficacy in subsets of this population.
Outpatient Management of Acute PE
Oct 07, 2013
Journal Club Podcast #6: September 2013
This month, Chris Carpenter and I chat about the safety and feasibility of managing acute pulmonary embolism in the outpatient setting...
Your emergency department’s Urgent Care area – the land of the relatively healthy. Your first mission of the day is a healthy undergraduate teenager referred from the campus clinic for evaluation of acute onset, unilateral, pleuritic chest pain. The referring physician’s specific concern is pulmonary embolism (PE). Your young patient denies any significant past medical or surgical history, including no (personal or family) history of prior venous thromboembolism. However, she does use birth control pills. She has had no recent chest trauma, upper respiratory infection, or skin rash.
Her vitals are BP 120/80, P 60, RR 16, T 37.1°C, and her room air pulse ox is 100%. She is lean and muscular without any other remarkable findings on physical exam, including no chest wall tenderness. Active twisting/bending range of motion of her trunk and chest wall is pain-free. You follow the Washington University pulmonary embolism diagnostic protocol, and because she uses birth control pills, she is non-low risk by the PERC criteria. Therefore, per the protocol, you order a D-dimer, which is elevated, and a subsequent PE protocol CT demonstrates a peripheral PE. You notify the patient of these findings, call the referring physician to let her know, and prepare to admit the patient. However, the admitting service calls you to discuss outpatient management of PE, which the Hospitalist insists is a common practice and based upon high-quality research. You decide to review the literature yourself before discharging your young patient home.
PICO Question:
Population: Adult patients with pulmonary embolism
Outcome: Morbidity, mortality, ED recidivism, cost, side effects
Search Strategy:
You use PUBMED to conduct a “broad” therapy study Clinical Query using the search term “pulmonary embolism” yielding 16242 citations which you subsequently combine with the search terms “emergency*” and “outpatient management” (27 citations – see http://tinyurl.com/m8nq8yg). The first citation is a recent meta-analysis which points you to all of the other references, including two observational trials and one randomized controlled trial.
Bottom Line:
PE Epidemiology and Overdiagnosis
According to Rosen’s textbook of emergency medicine, approximately 1 in every 500 to 1000 (0.1%-0.2%) ED patients has a pulmonary embolism (PE). Autopsy data range from 0.07% (Silverstein 1998) to 0.2% (Anderson 1991, Hansson 1997), although experts believe that clinical estimates of PE incidence underestimate the true value, whereas autopsy often overestimates it (White 2003). In fact, some autopsy data suggest that 60% of consecutive individuals have a PE if we look hard enough. Indeed, pundits increasingly suggest that contemporary CTs may too accurately diagnose PEs – meaning that clinically insignificant PEs are being detected by modern CT scanners (i.e. PEs that are not the cause of the patient’s symptoms, and not destined to cause death or permanent disability).
In support of this observation, there is a significant temporal trend of increased PE diagnoses since CT became widely available in 1998 in the United States and Australia. If clinically significant PEs were truly becoming more common since 1998 (as opposed to being overdiagnosed due to overtesting), then PE-related mortality should be increasing; instead, it has been stable over the last 40 years – thus meeting one defining element of “overdiagnosis” (Hoffman 2012, Moynihan 2012, Carpenter 2013, Preventing Overdiagnosis Consortium).
Furthermore, we are harming patients in the attempt to diagnose 100% of PEs. Newman estimates that in the pulmonary embolism rule-out criteria study, testing for PE prevented 6 deaths and 24 major/non-fatal PE events, while causing 36 deaths and 37 non-fatal major medical harms (renal failure, major hemorrhage, cancer). Overtesting links inextricably to overdiagnosis, and in the case of PE, ↑testing → ↑harm. The harms extend beyond iatrogenic injury, too, including patient anxiety over a PE diagnosis (one of which, in many cases, the patient never would have been aware, with no ill effect), as well as current and future costs (individual patient insurance premiums). Per-patient inpatient admission costs for PE in the United States ranged from $25,000 to $44,000 between 1998 and 2006, with post-hospitalization warfarin and lab testing estimated at $2,694.
Roots of the Overdiagnosis Problem
Why is there a problem with overtesting for PE? Multiple reasons exist in the United States, including inflated estimates of PE-related mortality.
Based upon two studies (Goldhaber 1999, Kline 2003), Rosen’s emergency medicine textbook reports that 10% of ED patients diagnosed with PE die within 30 days, even if promptly diagnosed and treated. However, this mortality estimate lumps all PE patients (sub-segmental versus segmental versus saddle embolus) into one large group and assumes that mortality is secondary to the PE. Based upon data from Kline, Newman estimated that PE-related mortality is 0.2%. What we really need to know is which PE patients benefit from anticoagulation. Unfortunately, CT does not usually provide us with that answer, so what options are available to maximize the risk-to-benefit ratio of PE testing and treatment?
What Can We Do?
The first-line defense against PE overdiagnosis is to use evidence-based diagnostics to guide which patients to evaluate with D-dimer and advanced imaging. We discussed this extensively at the August 2011 Journal Club (see http://tinyurl.com/ED-PE-Testing), which was based on 5 prior Journal Clubs and includes an algorithm that was accepted by Wash U Risk Management, Radiology, and Emergency Medicine, as well as the majority of the emergency departments across the St. Louis metropolitan area. Note that the algorithm includes a recommendation to contemplate V/Q scanning rather than CT. Why? Only 1% of “high probability” V/Q scans correspond to isolated sub-segmental PE, as compared with 15% of CT pulmonary angiograms.
The second line of defense against PE overdiagnosis-related overtreatment in the ED is to risk-stratify patients once we have diagnosed acute PE, since some of them may be safely discharged home. Some EM faculty argued for more ED-based testing of PE patients “using a thoughtful approach,” including troponin, ECG, BNP, and echocardiography as surrogate markers of right ventricular strain. However, the subset of PE patients who benefit from additional testing remains undefined, and this approach could lead us down yet another path of overtesting → overdiagnosis → overtreatment, so this opinion did not reflect the majority.
At least two PE risk stratification instruments exist, but our group agreed that the Pulmonary Embolism Severity Index (PESI) was the preferred risk stratification tool, based upon current evidence (Donzé 2008, Choi 2009) and in order to replicate the highest-quality ED-based outpatient PE management trials. The PESI can be computed online.
| Score   | Class | 30-day PE-related mortality |
|---------|-------|-----------------------------|
| ≤ 65    | I     | 0-1.6%                      |
| 66-85   | II    | 1.7-3.5%                    |
| 86-105  | III   | 3.2-7.1%                    |
| 106-125 | IV    | 4.0-11.4%                   |
| > 125   | V     | 10.0-24.5%                  |
If a subset of PE patients are discharged home, PESI Class I patients are the most obvious target.
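For quick reference, the class assignment above is a straightforward lookup once the PESI score has been computed (scoring of the individual PESI variables is not reproduced here); a minimal sketch using the cut-points in the table:

```python
# Map a computed PESI score to its risk class and the reported 30-day
# PE-related mortality range (thresholds from the table above).
# Computing the score itself requires the full set of PESI variables,
# which are not reproduced here.

def pesi_class(score: int) -> tuple[str, str]:
    if score <= 65:
        return "I", "0-1.6%"
    elif score <= 85:
        return "II", "1.7-3.5%"
    elif score <= 105:
        return "III", "3.2-7.1%"
    elif score <= 125:
        return "IV", "4.0-11.4%"
    return "V", "10.0-24.5%"

print(pesi_class(60))  # ('I', '0-1.6%') -- the obvious outpatient candidates
```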
Is Outpatient Management Safe and Effective?
ED physicians currently discharge 1% of PE patients, but are asked to do so by admitting services in 21% of cases. One recent systematic review identified 8 studies that diagnosed acute PE and discharged a portion of patients home. Seven of these studies were observational (i.e. not randomized controlled trials); only 4 included ED patients, and only 1 included U.S. patients. All of the studies had extensive exclusion criteria, including social factors (living alone, indigence, lack of transportation for outpatient follow-up) that prohibited outpatient PE management – all of which are reflected in the protocol that we derived with the Hospitalists (see below). In addition, all of the available studies used low molecular weight heparin + warfarin (not dabigatran or one of the other newer anticoagulation therapies), so this data cannot be extrapolated to these newer agents. No cases of venous thromboembolism (VTE) or hemorrhage-related death were noted across 7 studies with 90-day follow-up (0%, 95% CI 0-0.62%), while one study reported two deaths at 180 days (had these occurred at 90 days, the estimated risk would be 0.26%, 95% CI 0-1%). Recurrent VTE ranged from 0-6.2% and non-fatal hemorrhage from 0-1.2% at 90 days. Another systematic review published in August 2013 confirms the safety of outpatient PE management, if home circumstances are adequate.
The single randomized trial was an open-label non-inferiority trial across 19 EDs in four countries (including the U.S.) from 2007-2010. They included non-pregnant, PESI Class I or II patients without renal dysfunction, recent bleeding, or social issues. They set a non-inferiority margin of 4%, and in the per-protocol analysis outpatient management was non-inferior to inpatient care for 90-day recurrent VTE (0.6% difference, upper limit of 95% CI 2.9%, p = 0.014). However, the upper limit of the confidence interval for major bleeding exceeded the 4% threshold at 90 days (3/163 = 1.8%, upper limit of 95% CI 4.5%), so outpatient PE management is not non-inferior to inpatient management with respect to major bleeding risk. No major bleeds occurred in the inpatient group. The three major bleeds in the outpatient group were intramuscular hematomas on days 3 and 13 and one episode of menometrorrhagia on day 50. No difference in mortality was identified (0%, upper limit of 95% CI 2.1%). No differences in patient satisfaction were observed between inpatient and outpatient management (92% of outpatients versus 95% of inpatients satisfied or very satisfied). Hospitalization time was 0.5 days in the outpatient group versus 3.9 days in the inpatient group.
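A note on that bleeding bound: the trial's exact confidence interval method is not described here, but a one-sided 95% Wilson score upper bound approximately reproduces the reported 4.5% for 3 events in 163 patients. A minimal sketch, under that assumption:

```python
# Upper confidence bound for a binomial proportion (Wilson score method).
# With 3 major bleeds in 163 outpatients, a one-sided 95% bound (z = 1.645)
# lands near the reported 4.5% -- above the 4% non-inferiority margin.
# The trial's exact CI method may differ; this is an approximation.

import math

def wilson_upper(x: int, n: int, z: float = 1.645) -> float:
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center + half

print(wilson_upper(3, 163))  # ~0.045, i.e. ~4.5% > 4% margin
```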
Several implementation questions raised at Journal Club remain unresolved:
· Which risk-stratification instrument should be used?
· Is LMWH available to destitute ED patients 24/7?
· Who will provide LMWH teaching and is this instruction reliable?
· How will follow-up be assured and what QI process will close the loop?
· Our Hospitalist colleagues also asked whether future studies of outpatient PE management should evaluate whether inpatient care is superior to outpatient care, rather than whether outpatient management is non-inferior to admission.
In terms of facilitating shared decision making with PE patients who might be appropriate for outpatient management, the consensus was that a statement like the following was appropriate: “A single moderate quality study from 19 EDs in Europe and the U.S. demonstrates that certain PE patients can be safely and effectively treated with blood thinners at home, although there is a chance of increased bleeding risk at 90 days (< 5% at most) with home management.”
The role of alternative shared decision-making models to summarize results from non-inferiority studies (such as Cates plots [site 1 and site 2], natural frequencies, or number needed to treat) remains uncertain. Hospitalists and Emergency Medicine ultimately agreed upon the attached algorithm, which is also reproduced below.
Clinical Decision Rules in Low-Risk Chest Pain
Aug 29, 2013
Chief resident Tim Koboldt sits down with me to talk about low-risk chest pain, knowledge translation, and breaking down barriers...
You are working a standard Monday afternoon shift in EM1 and have just come out of room 7, where the patient has informed you that he is King of the Gremlins, when you see a new patient with a chief complaint of chest pain pop up on the board. You go to see the patient, Mr. X, a 39-year-old nonsmoking male with no prior medical history. He complains of substernal chest pain starting two hours ago while working in his yard. He states the pain was left-sided with no radiation and some mild shortness of breath. Symptoms have now resolved. Workup shows a normal EKG and a troponin negative x1. You think about further workup for this patient, who has been given aspirin and is now pain-free. The Observation Unit is unfortunately double-booked since Dr. Seltzer is working in EM2, so you consider your other options. Are there any clinical prediction rules that can help risk-stratify this patient and help you decide on a disposition? You “borrow” Dr. Gilmore’s iPad from the TCC chart rack and commence a literature search...
PICO Question:
Population: Adult patients presenting to the Emergency Department (ED) with chest pain in whom there is clinical concern for acute coronary syndrome (ACS)
Intervention: The application of a Clinical Decision Rule (CDR) to assess patients at such low risk that discharge home without stress testing is warranted
Comparison: Standard practice and clinical gestalt
Outcome: Death, myocardial infarction, life-threatening arrhythmia, quality of life
Search Strategy:
After searching PubMed using multiple strategies and identifying an abundance of articles addressing various decision rules for low-risk chest pain, you instead email Dr. Erik Hess, Associate Professor of Emergency Medicine and Critical Care at the Mayo Clinic in Rochester, Minnesota. A known expert in acute coronary syndrome research, he recommends the four selected articles to address the topic.
Bottom Line:
Chest pain remains a common chief complaint among patients presenting to the Emergency Department, accounting for more than 10 million visits annually in the US. According to data from the Physician Insurers Association of America, more than a quarter of all money paid in closed malpractice claims from 1985 to 2003 involved patients with a chief complaint of “chest pain.” Given the high risk of malpractice, and increased morbidity and mortality associated with missed diagnosis of ACS, many Emergency Physicians have a low threshold to perform extensive testing in these patients, frequently including provocative testing. The 2010 American Heart Association Scientific Statement on testing in low-risk chest pain patients in the ED supports this practice, recommending confirmatory testing in patients with negative or nondiagnostic ECGs and negative serial cardiac biomarkers prior to hospital discharge. Such confirmatory testing can include exercise treadmill testing, myocardial perfusion imaging, or coronary angiography (invasive or computed tomography).
Unfortunately, this recommendation does not take into account the potential downsides of such confirmatory testing in low-risk patients, and does not consider the impact of the test threshold on clinical decision making. Potential downsides include both monetary issues (cost of the test, prolonged hospitalization, time off work) and the risk of false positives. A simple exercise can demonstrate the high theoretical incidence of false positive stress testing in low-risk patients. Using reported accuracy measures for treadmill stress echocardiography (positive LR 7.94 and negative LR 0.19, Banerjee 2012), we can draw 2×2 tables for a hypothetical group of 1000 patients with a prevalence of coronary artery disease (CAD) of 5% (Table 1). We see that of the 50 patients with CAD, 41 will have a positive stress test, while 9 will have a false negative test. Equally important, we see that 100 patients without CAD will have a false positive stress test. Over two-thirds of patients with a positive stress test do not have CAD in this example. These 100 patients will likely be subjected to further testing, many undergoing invasive coronary angiography as a result. If we instead take a group of patients with a 2% prevalence of disease, we see that the false-positive risk increases, and the potential for harm (relative to the potential for benefit) increases significantly (Table 2).
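The arithmetic behind Tables 1 and 2 can be reproduced directly from the published likelihood ratios:

```python
# Worked version of the Table 1/Table 2 exercise: back out sensitivity and
# specificity from the reported LRs for treadmill stress echocardiography
# (LR+ 7.94, LR- 0.19; Banerjee 2012), then fill a 2x2 table for 1000
# hypothetical patients at a given CAD prevalence.

def sens_spec_from_lrs(lr_pos, lr_neg):
    # LR+ = sens / (1 - spec) and LR- = (1 - sens) / spec, solved jointly
    spec = (lr_pos - 1) / (lr_pos - lr_neg)
    sens = lr_pos * (1 - spec)
    return sens, spec

def two_by_two(prevalence, n=1000, lr_pos=7.94, lr_neg=0.19):
    sens, spec = sens_spec_from_lrs(lr_pos, lr_neg)
    diseased, healthy = n * prevalence, n * (1 - prevalence)
    tp, fn = diseased * sens, diseased * (1 - sens)
    fp, tn = healthy * (1 - spec), healthy * spec
    ppv = tp / (tp + fp)
    return tp, fn, fp, tn, ppv

for prev in (0.05, 0.02):  # the Table 1 and Table 2 scenarios
    tp, fn, fp, tn, ppv = two_by_two(prev)
    print(f"prevalence {prev:.0%}: TP={tp:.0f} FN={fn:.0f} "
          f"FP={fp:.0f} TN={tn:.0f} PPV={ppv:.0%}")
# At 5% prevalence: ~41 true positives, ~9 false negatives, ~99-100 false
# positives, and a PPV of ~29%, matching the prose above.
```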
Studies have demonstrated this high false positive rate in both “low risk” and “intermediate risk” patient populations. This high risk of false positive stress testing is underscored by its recent inclusion in the American College of Cardiology contribution to the Choosing Wisely campaign. While those recommendations were targeted at screening in asymptomatic patients rather than symptomatic ED patients, the same risks of “invasive procedures” and “excess radiation exposure” apply to any population at sufficiently low risk. It is therefore imperative to identify ED patients at sufficiently low risk of ACS that further cardiac imaging is unnecessary. The use of clinical decision rules (CDRs) and accelerated diagnostic protocols (ADPs) may help solve this problem.
The ASPECT trial looked at an ADP that consisted of a thrombolysis in myocardial infarction (TIMI) score of 0 and normal cardiac biomarkers (troponin and CK-MB) at presentation and 2 hours later. This ADP identified a population with a 0.9% risk of a major adverse cardiovascular event (MACE) at 30 days, with a sensitivity of 99.3% and a negative predictive value (NPV) of 99.1%. There were 3 patients in the study with a false negative ADP, 2 of whom required coronary stenting and one of whom underwent radiofrequency ablation for a 30-minute episode of stable ventricular tachycardia; none died or had serious morbidity. A follow-up study (Aldous 2012) looked at several CDRs and ADPs in patients from one institution in the ASPECT trial. Three ADPs were found to be superior, including the 2-hour TIMI (sensitivity 99.2%, NPV 98.1%), an ADP reported by Hess et al (sensitivity 99.7%, NPV 98.9%), and an ADP reported by Christensen et al (sensitivity 99.4%, NPV 93.8%). Of these, the 2-hour TIMI had the highest specificity (23.8%) while the ADP by Christensen had the lowest (4%), and would likely have limited clinical utility as a result.
An evaluation of the HEART score from Wake Forest Baptist Medical Center (Mahler 2011) demonstrated a high NPV (99.4%). Its poor sensitivity (58.3%) was due to the inclusion of only low-risk patients in the cohort (those with a TIMI score < 2 and low clinical suspicion of ACS). Combining the HEART score with a negative 4- to 6-hour troponin increased both the sensitivity and NPV to 100%. Unfortunately, the study protocol required the use of two CDRs (TIMI and HEART) for inclusion, making application to everyday practice difficult. Poor methodology also makes the study’s results unreliable.
Beyond ADPs and CDRs, Jeff Kline has looked at the use of attribute matching to provide pre-test probabilities for patients at risk for ACS. A recent randomized controlled trial assessed the impact of providing these pre-test probabilities to patients and clinicians in the ED. While this practice reduced the rate of negative testing associated with significant radiation exposure (> 5 mSv), this effect was not the result of reduced provocative testing, but rather a shift from radiologic testing to non-radiologic testing in the intervention group. It is unclear why attribute matching would cause such a shift in the type of tests ordered by the clinician. Further benefit should be shown prior to widespread implementation of the tool in clinical practice, especially given its proprietary nature and the associated cost.
Consensus:
While most present agreed that reducing stress testing in patients at very low risk of adverse outcomes would likely benefit both patients and society, the quantification of what constituted significantly low risk was variable, ranging from 0% up to 5%. Medicolegal risk was cited as the primary concern when discharging low-risk patients without provocative testing. Ongoing research into shared decision making may help remove some of these barriers, decrease healthcare utilization, reduce the risk of false-positive stress testing, and improve patient care.
Intranasal Naloxone in Acute Opiate Overdose
Aug 01, 2013
Journal Club Podcast #4: July 2013
Emergency Physician and Toxicologist Evan Schwarz sits down with me to discuss intranasal naloxone administration...
It’s a typical busy shift in TCC. Your attending is busy dealing with a 75-year-old with transient hypotension and syncope who was upgraded to a level 1 trauma due to his forehead abrasion when a car pulls up to the trauma bay, dumps out a 24-year-old male who is apneic, and drives away. As you get him into a room, you notice the pinpoint pupils, track marks covering all his extremities, and the burned spoon that falls out of his pocket. You order naloxone but unfortunately no one is able to insert a peripheral IV.
Using your expert ultrasound skills, you place a 22 gauge IV in his thumb and revive him. You spot your attending as you leave the room and let him know about your nice save. Instead of the “great job!” that you are expecting, he just asks “well why didn’t you use a different route to administer the narcan?” Feeling discouraged, you make your way back to EM1 and spot Dr. Halcomb and Dr. Mullins in the hall. You go to ask them their opinion about giving narcan by a different route but notice that they are having a lively conversation about the Krebs cycle. Deciding you don’t want any part of that, you go back to Dr. Cohn’s office to do your own literature search.
PICO Question:
Population: Adult patients with acute opiate overdose with respiratory depression and/or altered mental status
Intervention: Intranasal naloxone
Comparison: Intravenous, intramuscular, or subcutaneous naloxone
Outcome: Time from first contact or drug administration to significantly increased respiratory rate or improved mentation, need for rescue dose of naloxone, healthcare cost, duration of emergency department (ED) stay, and incidence of needlestick injury by providers.
Search Strategy:
After quickly searching Google and pulling up a variety of entertaining pictures about heroin intoxication, you decide that PubMed might be more useful. You enter a number of search terms including “narcan,” “naloxone,” “opiate,” “heroin,” “intranasal,” “intramuscular,” and “subcutaneous.” You eventually simplify your search, entering “intranasal naloxone” (http://tinyurl.com/nztrfdb), resulting in 46 articles. You scan these and come up with the following 4 articles.
Bottom Line:
Opiate overdose remains a significant health problem in the US and elsewhere, with increasing overall rates of overdose and death related to overdose observed (Hall 1999, Preti 2002, Shah 2008, Bryant 2004). Naloxone remains the second most commonly administered antidote in the US (Wiegand 2012). While naloxone is typically administered either by intravenous (IV) or intramuscular (IM) injection, other potential routes of administration have been proposed, including nebulization (Baumann 2013, Tataris 2013), endotracheal injection (Tandberg 1982), and subcutaneous (SQ) injection (Wanger 1998). High rates of needlestick injuries have led some to propose avoiding IV, IM, and SQ naloxone administration, with many favoring intranasal (IN) administration as first-line treatment in the management of opiate overdose.
In addition to reducing the risk of percutaneous needlesticks among healthcare workers, IN naloxone administration has other potential benefits over other routes. Theoretically, naloxone should have 100% bioavailability through the nasal mucosa with an onset of action and half-life similar to IV administration. While rat studies have demonstrated such pharmacokinetics, a single human volunteer study showed poor bioavailability (4% vs. 35%) for the IN vs. the IV route (Dowling 2008); however, this study used a more dilute solution of naloxone than typically recommended (2 mg in 5 mL) and did not address the impact of IN naloxone in actual opiate overdose.
Two retrospective chart reviews were identified assessing the utility of IN versus IV naloxone in the prehospital setting. The study by Robertson demonstrated similar rates of clinical response (66% and 56% in the IN and IV groups, respectively) and similar mean time from patient contact to clinical response (20.3 vs. 20.7 minutes in the IN and IV groups, respectively). The study by Merlin demonstrated similar median increases in respiratory rate (RR) (4 vs. 6 breaths/minute) and Glasgow Coma Scale (GCS) (3 vs. 4) between the IN and IV groups.
Unfortunately, these studies suffered from several major methodological flaws. These were both retrospective chart reviews using prehospital records. Neither study reported on appropriate chart review methods, including abstractor training, use of standardized abstraction forms, blinding of abstractors to the study hypothesis, or assessment of interrater reliability. Additionally, there was no standardized or independent method used to assess outcomes such as respiratory rate, time elapsed, or the need for additional doses of naloxone, and no interrater reliability was assessed for these measurements. The study by Merlin assessed changes in RR and GCS over the course of the EMS encounter, but did not assess the duration of time over which these changes occurred. While the study results suggest that IN naloxone is as effective as IV naloxone, the poor methodological quality of the studies brings this conclusion into question.
Two randomized controlled trials were also identified, assessing IN versus IM naloxone in acute opiate overdose in the state of Victoria, Australia. The two studies were conducted by the same group, 4 years apart. In the first study, the IM group required less time from naloxone administration to achieve a respiratory rate of 10 breaths per minute than the IN group (mean 6 minutes vs. 8 minutes, p = 0.006), and was more likely to achieve a respiratory rate of 10 or more by 8 minutes after naloxone administration (82% vs. 63%, p = 0.0163). The IN group did have a lower rate of agitation/irritation (2% vs. 13%), and there was no statistically significant difference in the rates of minor adverse events. There was no difference in the need for rescue naloxone. The second study demonstrated similar rates of adequate response within 10 minutes of initial naloxone administration between the IN group and IM group (72.3% vs. 77.5%), as well as similar mean response times (8.0 minutes vs. 7.9 minutes). More patients in the IN group required rescue naloxone (18.1%) compared to the IM group (4.5%). There was no difference between the two groups with respect to minor adverse events, hospitalization rates, agitation/violence, nausea/vomiting, or headache.
While these studies were randomized, the providers, patients, and outcome assessors were not blinded to treatment group; no sham or placebo treatments were given. As in the previous two studies, there was no standardized, objective, independent means of measuring time or respiratory rate, or of determining the need for rescue naloxone. The difference in outcomes between these two studies was addressed by the authors, who conceded that the first study used a diluted solution of naloxone in the IN group, administering 5 mL via atomizer instead of the standard 1 mL in order to deliver 2 mg of naloxone (Wolfe 2004). As previously stated, this may lead to decreased bioavailability, likely due to the inability of the nasal mucosa to absorb such a large volume leading to a significant amount of medication being swallowed.
Consensus
Given the similar rates of minor adverse events in these four studies, and the low rate of major adverse events (one seizure in a patient receiving IM naloxone in the Kerr study), all three routes seem reasonably safe. Despite all of the authors' reported concerns over needlestick injury in healthcare workers administering IV and IM naloxone, there were no reported needlestick injuries in any of the studies. The current evidence suggests that IN naloxone is both safe and effective, though a rescue dose of naloxone may be required in many cases. The consensus among our group was that few would use IN naloxone as first-line therapy for opiate overdose, but that it would be a viable alternative in certain circumstances (e.g. patients with difficult IV access, patients requiring extraction from vehicles).
In addition to healthcare provider-administered IN naloxone, peer distribution of IN naloxone has been studied in the US (Doe-Simkins 2009). A survey of Australian heroin users revealed strong support for the practice, and groups in Australia (Lenton 2000, Lenton 2009) and the UK (Wright 2006) have pushed for further research in this area. Potential problems with peer distribution include the medico-legal risk of prescribing medications that will most likely be administered to people other than those for whom they were prescribed, the cost of distributing a sufficient supply of naloxone to have a significant impact on overdose morbidity and mortality, and failure of bystanders to contact EMS after administering naloxone, potentially leading to increased morbidity and mortality in cases of naloxone failure or overdose recrudescence (Darke 1997). Further research to address these concerns will likely be necessary before widespread implementation.
Cardiac Ultrasound in Cardiac Arrest
Jul 08, 2013
Journal Club Podcast #3
Ultrasound expert Dan Theodoro and I discuss the use of cardiac ultrasound in predicting outcomes in cardiac arrest...
A shift in TCC yields a now all too familiar dilemma. At peak hours you must manage a patient who has sepsis and requires a central line to follow ScvO2. Your other patient is an asthmatic who, after overstaying the allotted 23 hours in your observation unit by 4 hours, can’t speak in full sentences despite a tripod stance and BiPAP. A level 1 hypotensive trauma patient with a pelvic fracture awaits VIR. The nurses hand you an EKG and you shriek because, without even looking down, the machine is reading ACUTE MI and this time, it’s right. As you groan thinking it can’t get worse than this, it does... EMS brings in a 45-year-old healthy-appearing patient in full cardiac arrest, down an unknown number of minutes on the scene. It’s unclear, but EMS says bystanders were there without an AED but providing compressions for most of the time your patient was down. Just to make it worse, EMS is walking in your patient’s 5- and 7-year-old children to the social work office while their parent goes into TCC-1, Left.
“You don’t have time for this,” whispers a little voice in your head that on this occasion might be right. As the nurses help move the patient and place monitor leads on the chest, you confirm ETT placement and find out EMS delivered 3 rounds of epinephrine without good effect. You ask to hold compressions but feel no pulse despite narrow complexes going across the monitor at a rate of about 80. Compressions continue, you perform bilateral chest decompression, administer another round of epinephrine, calcium gluconate, and bicarb, and wait 2 minutes while compressions continue. Compressions are held but you feel no pulse (other than your heart pounding as the nurse tells you the AMI looks grey and your septic patient is hypotensive); the monitor again shows PEA. The team, looking desperate, wants you to “call it.” You pull out your ultrasound hoping it might show you a cardiac effusion you can drain and a story you can tell for the ages, but your patient is not so lucky. All you see is standstill. Total resuscitation time gone by: 19 minutes. You ask yourself, “Can I call it? Do I have enough information to tell those kids their parent is gone and there’s nothing I can do about it?”
Time stops! Chris Carpenter bounds in, computer in hand, followed by Brian Cohn. They both have on ninja outfits. Chris’ is all black with the words “Knowledge” and “Power” embroidered down the sleeves. Brian’s is white with a green belt. Carpenter quickly assesses what your PICO should be but hesitates a second to get it going as his mentor’s words echo in his head, “He who spreads himself too thin ends up with margarine.” Reluctantly (or maybe not so reluctantly), he hands the computer off to Brian Cohn who, trembling in the presence of EBM greatness but newly armed with McDaddy EBM knowledge from McMaster’s EBM course, puts his head down and grunts “Let’s get it on!”
PICO Question:
Population: ED patients who present in cardiac arrest
Intervention: Bedside ultrasound
Comparison: Clinical gestalt, end-tidal CO2
Outcome: Return of spontaneous circulation, survival to hospital admission, survival to hospital discharge, meaningful neurologic recovery
Search Strategy:
Out of nowhere Brian inputs (((heart arrest OR cardiopulmonary resuscitation OR cardio-pulmonary resuscitation OR CPR OR advanced cardiac life support OR cardiac arrest OR asystole) AND (echocardiography OR ultrasonography OR echocardi* OR echo OR cardiac echo OR cardiac ultrasound OR cardiac ultrasonography OR TTE OR transthoracic echocardiography OR transthoracic echocardiogram OR trans-thoracic echocardiogram OR trans‐thoracic echocardiography OR ultrasound OR sonogram) AND ((incidence[MeSH:noexp] OR mortality[MeSH] OR follow up studies[MeSH:noexp] OR prognos*[TextWord] OR predict*[TextWord] OR course*[TextWord] or death[TextWord]) OR (predict*[tiab] OR predictive value of tests[mh] OR scor*[tiab] OR observ*[tiab] OR observer variation[mh])))) into PubMed. The search yields several articles from which you pick these 4...
Bottom Line:
Out-of-hospital cardiac arrest remains a leading cause of death in the United States, with an estimated incidence of 300,000 events per year (McNally 2011). Overall survival has remained stable at approximately 8% since the 1950s (Sasson 2010) despite initiatives to improve survival rates (improved bystander CPR, public use of automated external defibrillators). Given the very low likelihood of survival in patients presenting to the ED without a pulse (0.9% in one large database), efforts have been made to determine those in whom ongoing resuscitation is futile.
While some recommend bedside cardiac ultrasound as a diagnostic tool in cardiac arrest to aid in detection of reversible conditions such as severe hypovolemia, pericardial tamponade, and massive pulmonary embolism (Hernandez 2008), there has also been widespread use to evaluate for the absence of cardiac activity, with the results frequently affecting the duration of resuscitation efforts. In one survey of graduates from the LA County/USC Medical Center residency program, 68% reported using ultrasound during cardiac arrest, and 91% of these reported using the results in deciding when to terminate resuscitation efforts (Schoenberger 2007). It is important, however, to review the evidence surrounding this practice.
We identified three primary studies (Blaivas 2001, Salen 2001, Aichinger 2012) and one meta-analysis (Blyth 2012) dealing with this subject. In patients in cardiac arrest with no cardiac activity on ultrasound, all of the papers revealed low rates of survival to hospital admission, ranging from 0% to 3.1%. While no clearly defined survival threshold exists above which resuscitation should be continued, given the severity of the outcome a low threshold should be used. The pooled survival to hospital admission rate of 2.4% (95% CI 1.3-4.5%) in the meta-analysis indicates that cardiac ultrasound should not be used alone to determine when further attempts at resuscitation are futile. Unfortunately, the meta-analysis and two of the studies did not assess survival beyond admission or neurologically intact survival. The one study to assess survival to hospital discharge (Aichinger 2012) found that none of the 31 patients with cardiac standstill survived to discharge; however, the width of the 95% confidence interval (0%-11%) prevents us from safely applying these results to clinical practice.
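The confidence-interval caveat is easy to verify. For zero observed events, the exact (Clopper-Pearson) upper 95% limit has a closed form, and the familiar "rule of three" gives a quick approximation:

```python
# Check of the 0/31 confidence interval quoted above (Aichinger 2012):
# exact Clopper-Pearson upper 95% limit for zero events is 1 - 0.025**(1/n).
n = 31
print(f"exact upper 95% limit for 0/{n}: {1 - 0.025 ** (1 / n):.1%}")  # ~11.2%
print(f"'rule of three' approximation 3/n: {3 / n:.1%}")               # ~9.7%
```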
There was consensus among those present that these studies do not support stopping resuscitation efforts in cardiac arrest patients based on the absence of cardiac activity on ultrasound. This decision was based on both the size of the studies (and the resulting width of the confidence intervals) and the lack of assessment of long-term outcomes (beyond hospital admission/discharge). Many felt that future studies should exclude patients in ventricular fibrillation or ventricular tachycardia, as standard care would dictate ongoing resuscitation in these patients. Future studies will also need to be wary of the potential adverse consequences of pausing CPR to perform ultrasound, taking care to time imaging with pulse checks.
One of the issues in these studies was the use of relatively short-term outcomes. The meta-analysis and two of the primary studies evaluated survival to hospital admission as the primary outcome. More clinically important outcomes may include survival to hospital discharge, 30-day or one-year survival, and neurologically intact survival assessed by modified Rankin Scale or CPC score. A conference of the Research Working Group of the American Heart Association Emergency Cardiovascular Care Committee to discuss appropriate outcomes in resuscitation research demonstrated the difficulties in such studies. There was no consensus on a single appropriate outcome, and conference participants were unable to agree on the ideal outcome measure when confronted with 4 hypothetical cases. There was consensus that large trials designed to have a major impact should use longer-term endpoints at least 90 days out coupled with some neurological and quality-of-life assessment.
There are currently no registered trials addressing long-term outcomes of cardiac arrest patients with cardiac standstill on ultrasound. The REASON 1 trial is currently recruiting patients to assess survival to hospital discharge, and the investigators plan to enroll 1000 patients, which would make it the largest study to date on this subject. Depending on the results of this trial, the investigators plan to seek funding for a similar trial looking at long-term outcomes in these patients.
The INTERACT2 Trial- Blood Pressure Reduction in Spontaneous Intracerebral Hemorrhage
Jun 24, 2013
Patients with hemorrhagic stroke, or intracerebral hemorrhage (ICH), are common in emergency medicine and frequently present to the emergency department with significantly elevated blood pressures requiring urgent treatment. The correct goal blood pressure is debatable. On one hand there is concern that the hydrostatic effects of elevated blood pressure could contribute to expansion of the hematoma, increased edema, or rebleeding. On the other hand, overcorrection of the blood pressure could lead to decreased cerebral perfusion in the setting of autoregulation.
Currently, the AHA/American Stroke Association recommendations for the management of blood pressure in spontaneous (atraumatic) ICH are as follows:
1) If SBP is > 200 mmHg or MAP is >150 mmHg, then consider aggressive reduction of BP with continuous intravenous infusion.
2) If SBP is >180 mmHg or MAP is >130 mmHg and there is the possibility of elevated ICP, then consider monitoring ICP and reducing BP using intermittent or continuous intravenous medications while maintaining a cerebral perfusion pressure >60 mmHg.
3) If SBP is >180 mmHg or MAP is >130 mmHg and there is no evidence of elevated ICP, then consider a modest reduction of BP (e.g. MAP of 110 mmHg or target BP of 160/90 mmHg) using intermittent or continuous intravenous medications to control BP.
4) In patients presenting with a systolic BP of 150 to 220 mmHg, acute lowering of systolic BP to 140 mmHg is probably safe.
Note that these recommendations are Class C, based partly on the INTERACT pilot study, which enrolled 440 patients and compared a goal SBP of 140 mmHg with a more modest reduction to 180 mmHg. They found a trend toward reduced hematoma growth with no increase in neurologic deterioration or differences in other clinical outcomes (disability, quality of life). The study was underpowered to detect significant differences in any of the outcomes.
Methods:
INTERACT2 was an international, multicenter, prospective, randomized trial. The study was open-treatment by necessity; patients and providers could not be blinded to group allocation due to the nature of the intervention, but they were blinded to the trial's end-point, and outcome assessors were blinded to group allocation. Patients were excluded if they had a definite contraindication to BP-lowering treatment, if treatment could not be initiated within 6 hours of ICH, if there was a structural cause for the ICH, if they had a GCS of 3-5 (“deep coma”), if they had a massive hematoma with poor prognosis, or if early surgical evacuation of the hematoma was planned.
Patients were randomized to two groups: 1) in the intensive-treatment group, IV and oral antihypertensives were initiated according to protocols based on the local availability of agents, with the goal of achieving a SBP < 140 mmHg within one hour of randomization and maintaining this level for 7 days; 2) in the standard-treatment group, BP-lowering treatment was initiated if the SBP was > 180 mmHg, with no lower target stipulated. All patients were started on oral antihypertensives within 7 days or at hospital discharge. Patients were followed up in person or by telephone at 28 days and 90 days by local staff blinded to treatment group, and were analyzed according to the intention-to-treat principle.
The primary outcome was the proportion of patients with a poor outcome: death or severe disability (a score of 3-5 on the modified Rankin scale at 90 days after randomization). Initially, the key secondary outcome was death or severe disability in patients in whom treatment was initiated within 4 hours of onset of ICH. This was changed prior to data analysis to physical function across all 7 levels of the modified Rankin scale using ordinal analysis. Other secondary outcomes included: all-cause mortality; cause-specific mortality; health related quality of life; duration of hospitalization; residence at a residential care facility at 90 days; poor outcomes at 7 and 28 days; and serious adverse events (neurologic deterioration or severe hypotension). They also assessed change in hematoma size in a subset of patients who underwent repeat CT or MRI at 24 hours post-ICH.
Between October 2008 and August 2012, 2839 patients were enrolled at 144 hospitals in 21 countries; 1403 were assigned to intensive-treatment and 1436 were assigned to standard treatment. Patients were similar with respect to known risk factors…with the exception of warfarin use (3.6% in the intensive-treatment group, 2.2% in the standard-treatment group, p = 0.025). Approximately 68% of all patients were recruited from Chinese hospitals. In terms of treatment, patients in the intensive-treatment group received antihypertensives more quickly and had more rapid lowering of BP than those in the standard-treatment group. Of note, patients in the intensive-treatment group were more likely to have care withdrawn than those in the standard-treatment group (5.4% vs. 3.3%, p = 0.005). The primary outcome was assessed in 98.5% of intensive-treatment patients and 98.3% of standard-treatment patients.
Results:
For the primary outcome, poor outcome at 90 days (death or score of 3-5 on the modified Rankin scale), there was no difference between the intensive-treatment (52%) and standard-treatment (55.6%) groups (OR 0.87; 95% CI 0.75-1.01). Ordinal analysis showed a favorable shift in the distribution of scores on the modified Rankin scale for patients in the intensive-treatment group (pooled OR for a shift to higher score of 0.87, 95% CI 0.77-1.00). The rate of death from any cause was similar between the intensive-treatment and standard-treatment groups (11.9% vs. 12.0%; OR 0.99; 95% CI 0.79-1.25), as were the percentage of deaths directly attributed to the ICH, duration of hospitalization, and rates of serious adverse events.
Patients in the intensive-treatment group reported better quality of life at 90 days on the European QOL 5-dimension questionnaire, which assesses mobility, self-care, usual activities, pain or discomfort, and anxiety or depression (each graded on a 3-level scale as no problems, moderate problems, or extreme problems), than those in the standard-treatment group (mean utility score 0.60 vs. 0.55, p = 0.002).
For patients who underwent repeat brain imaging at 24 hours (35.1% of the intensive-treatment group and 33.1% of the standard-treatment group) there was no significant difference in the mean hematoma growth: relative difference 4.5% (95% CI -3.1% to 12.7%), absolute difference 1.4 mL (95% CI -0.6 mL to 3.4 mL).
Commentary:
The authors note that while intensive lowering of BP in ICH did not result in a significant reduction in the primary outcome, there were better functional outcomes when an ordinal analysis of the primary outcome was undertaken. The ordinal analysis showed a very slight benefit with intensive lowering of BP, with a 95% CI that just reaches 1.00 (and a p-value of 0.04). Keep in mind that the authors chose to perform ordinal analysis after the data had been collected. One should always beware when investigators change the study in some way after data collection has been initiated. In this case, the authors’ reasoning was that ordinal analysis began gaining acceptance in clinical trials partway through the course of the trial. Also keep in mind that this is the secondary outcome, not the primary outcome, which was selected a priori. For the other secondary outcome of quality of life, there was a slight improvement with intensive lowering of BP with a mean EQ-5D score of 0.6 vs. 0.55. The clinical significance of this difference is unfortunately difficult to interpret. If my overall score is 0.6, how much better off am I than someone with a score of 0.55?
Issues of external validity have been raised as well, given that the majority of patients in the study (68%) were recruited from Chinese institutions. We have to ask two questions: 1) are Chinese patients with ICH somehow different than patients in our own institutions? and 2) is the treatment of ICH in Chinese institutions different than treatment at our institutions? Given that the majority of patients with spontaneous ICH are treated with supportive care, and that the indications for invasive procedures (ICP monitoring, craniotomy/craniectomy) are likely the same at any large hospital with neurosurgical capability, I would suspect that these results are likely valid in the US. Additional studies at US institutions may help elucidate this further.
Bottom Line:
The authors conclude that “early intensive lowering of BP in this patient population is safe,” which is in keeping with the AHA/ASA guidelines, which state “In patients presenting with a systolic BP of 150 to 220 mmHg, acute lowering of systolic BP to 140 mmHg is probably safe.” I tend to agree with these statements. This study alone should by no means make intensive lowering of BP in ICH the standard of care, as there is no clear benefit.
EBM Teaching Point:
The authors in this study used an ordinal analysis of the modified Rankin score. The Rankin score is an example of a type of categorical data known as ordinal data, in which numerical scores are arbitrarily assigned to levels of measurement. The assigned number itself has no meaning, other than to provide an ordered scale for reference. Examples include pain scores (0 out of 10 means no pain, 10 out of 10 is severe pain). Classically, studies that look at outcomes based on ordinal data have applied a cutoff, thus turning a scale with multiple levels into a binary outcome. Instead of worrying about whether you’re a 1, 2, 3, etc., we only care about whether you’re above or below the cutoff. We can then compare the proportions above or below the cutoff in the 2 groups and compute p-values or chi-square values. Ordinal analysis, on the other hand, uses more complex statistical methods to assess shifts across the entire spectrum of the scale. This has been shown to have higher statistical power than using binary outcomes alone.
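To make the power difference concrete, here is an illustrative simulation (invented distributions, not INTERACT2 data): a treatment that shifts patients between mRS levels mostly on the same side of the 3-6 "poor outcome" cutoff is nearly invisible to the dichotomized comparison but readily detected by a rank-based ordinal test.

```python
# Illustrative simulation only; group sizes and score distributions are
# invented for demonstration and do not come from INTERACT2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
levels = np.arange(7)  # modified Rankin scale 0-6
p_control = [0.05, 0.10, 0.15, 0.20, 0.20, 0.18, 0.12]  # 70% poor (mRS 3-6)
p_treated = [0.08, 0.12, 0.11, 0.26, 0.20, 0.14, 0.09]  # 69% poor, shifted downward

control = rng.choice(levels, size=1400, p=p_control)
treated = rng.choice(levels, size=1400, p=p_treated)

# Ordinal analysis: rank-based comparison across all seven levels
ordinal_p = stats.mannwhitneyu(treated, control, alternative="two-sided").pvalue

# Classical approach: dichotomize at mRS >= 3 and compare proportions
table = [[(treated >= 3).sum(), (treated < 3).sum()],
         [(control >= 3).sum(), (control < 3).sum()]]
_, binary_p, _, _ = stats.chi2_contingency(table)

print(f"ordinal (Mann-Whitney) p = {ordinal_p:.2g}")
print(f"dichotomized (chi-square) p = {binary_p:.2g}")
```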
Asymptomatic Hypertension in the Emergency Department
Jun 12, 2013
Journal Club Podcast #2
My good friend Greg Polites and I sit down and discuss the evaluation and management of patients with asymptomatic severely elevated hypertension in the emergency department...
Vignette:
You arrive in TCC for your 3p-11p shift and are taking sign out from Dr. Bavolek after one of her typical trauma shifts. After receiving report on the 4th level 1 trauma, the stroke patient receiving tPA and awaiting an NNICU bed, the “wound check” which turned out to be nec fasc, and “I’ve had abdominal pain for 6 months, I’m here for a third opinion,” you see that EM4 has a green zoom icon. “Oh, don’t worry about him,” she says, “that dental pain is going home.” On the way to see your first patient, paged out as “unknown male found down, agonal respirations, CPR in progress TCC6. TY Cindi,” you are stopped by one of our new nurses. “Hey, are you done with sign out? Did Dr. Bavolek leave already? I was just discharging EM4 but his blood pressure is 215/111, do you want to do anything about that?” In between shocks, CPR, and intubating the new patient, you manage to review EM4’s chart. He’s a 51-year-old African American gentleman who came in with left lower dental pain. He has no past medical history but has not sought medical care in years. After an exam showed no abscess, he received an inferior alveolar block and was discharged on Vicodin and penicillin with both a dental referral and IHN follow-up. While placing an ICY line with your new electronic manometer you think to yourself, “huh, that’s kind of high, what DO I do? No, better yet, what would Chris Carpenter do? WWCCD… WWCCD…” As you’re waiting for radiology to pull up your chest X-ray confirming tube placement, you open PubMed at the computer between TCC4 and TCC5 and this is what you find…
PICO Question:
Population: Patients presenting to the emergency department (ED) with severely elevated blood pressure and no signs or symptoms concerning for end-organ damage
Intervention: Laboratory testing, electrocardiogram (ECG), chest x-ray, or rapid reduction in blood pressure
Comparison: Outpatient referral for evaluation and initiation of antihypertensive therapy
Outcome: Stroke, MI, renal failure, dialysis, death
Search Strategy:
In PubMed, the search terms “asymptomatic hypertension” and “emergency department” were entered (http://tinyurl.com/d44c5vq), resulting in 83 articles, from which 3 relevant articles were selected. A search of the ACEP clinical policies for hypertension yields the final article.
Bottom Line:
Elevated BP remains a common finding in patients presenting to the emergency department (Karras 2005). While the detrimental effects of long-standing untreated hypertension have been well documented with respect to the risk of stroke, myocardial infarction, and chronic kidney disease, the risks of untreated hypertension in the short term have not been proven. For patients with signs or symptoms of end-organ damage, the course of action for Emergency Physicians is fairly straightforward, with work-up and treatment recommended. The dilemma lies with those patients presenting with asymptomatic elevated BP.
Currently, the ED management of patients with severely elevated blood pressure is highly variable, with differences in both evaluation (from no testing to extensive ED work-up) and treatment. Treatment options include intravenous or oral antihypertensives to effect an immediate reduction in BP, initiation or modification of an outpatient antihypertensive regimen, or discharge for follow-up with no antihypertensive medications.
Two articles were identified which addressed the utility of ED testing in asymptomatic hypertensive patients (Karras 2008, Nishijima 2010). Typically, testing involves evaluation of a urinalysis and basic metabolic profile to assess for renal injury, an electrocardiogram to assess for cardiac ischemia, and a complete blood count to evaluate for anemia. In these two studies, 6%-7.2% of patients had abnormal results that resulted in a change in management. Both were multicenter studies conducted at urban EDs, and therefore may not be externally valid to community EDs, managed healthcare institutions, or countries with national healthcare systems. As the majority of management changes involved hospital admission, the lack of primary care follow-up in urban EDs, with largely uninsured and Medicare/Medicaid patients (Asplin 2005, Owens 2009), may have artificially inflated the proportion of patients whose management was altered. Insurance status has been shown to influence admission rates in other disease processes, including TIA (Chaudhry 2013) and venous thromboembolism (Misky 2011). Additionally, these two studies on asymptomatic hypertension did not assess the long-term impact of ED testing on patient-important outcomes (stroke, MI, death, renal failure).
Only one article was identified addressing the effect of rapid BP reduction in ED patients with asymptomatic severely elevated hypertension (Zeller 1989). This study demonstrated similar reductions in BP one week after ED presentation among patients discharged on an outpatient antihypertensive regimen across all 3 treatment groups: 1) patients given 0.1 mg of clonidine in the ED every hour for up to 5 hours until sufficient BP reduction was achieved; 2) patients given placebo every hour for up to 5 hours; and 3) patients discharged from the ED immediately. While this study assessed a surrogate outcome (mean change in BP), it seems unlikely that short-term elevations in BP (less than one week) would cause a significant increase in the risk of the more important outcomes previously described.
The American College of Emergency Physicians has a clinical policy addressing the evaluation and management of patients with asymptomatic hypertension. Unfortunately, this policy is based on limited available evidence, and no level A recommendations were made. With regards to the accuracy and reliability of BP readings in the ED in asymptomatic patients, 2 recommendations were made:
1) A level B recommendation was made that patients with persistently elevated BP (systolic BP > 140 mmHg, diastolic BP > 90 mmHg) should be referred for follow-up.
2) A level C recommendation was made that patients with a single elevated BP may need further screening as outpatients.
With regards to the rapid lowering of BP in asymptomatic patients, 3 level B recommendations were made:
1) Initiation of treatment is not necessary in patients with follow-up.
2) Rapid lowering of BP is unnecessary and potentially harmful.
3) Management, when initiated, should attempt to gradually lower the BP, and should not be expected to normalize the BP during the ED stay.
This clinical policy was limited, not only by the lack of available evidence, but also in its failure to address patient values and preferences; the policy was based purely on physician and nursing input. Additionally, policies and guidelines are often difficult to interpret due to lack of a standardized system for grading the evidence and the strength of the recommendations. The GRADE working group (Grading of Recommendations Assessment, Development, and Evaluation) has been working to address these shortcomings and standardize grading across multiple organizations. This system recommends assigning a “strength of recommendation” based upon the uncertainty associated with risks and benefits of an intervention, rather than physician preference and gestalt.
The current literature supports a focused laboratory assessment of patients with asymptomatic hypertension in the ED, particularly among patients with poor follow-up. It seems reasonable to initiate outpatient antihypertensive management in patients with persistently elevated BP in the ED, again particularly in those in whom early follow-up is not guaranteed. The rapid lowering of BP in the ED in patients without symptoms of end-organ damage does not seem to provide any benefit with respect to BP measurement at one week, and likely provides no decrease in the risk of patient-important outcomes. Additionally, this practice may actually lead to increased short-term risk, and should be avoided.
Synovial Lactate in the Diagnosis of Septic Arthritis
Jun 01, 2013
Journal Club Podcast #1
This month, I sit down with Chris Carpenter, evidence-based medicine guru, and talk about the diagnostic accuracy of synovial lactate in septic arthritis...
Article 1: Evidence Based Diagnostics: Adult Septic Arthritis, Acad Emerg Med 2011; 18(8):781-796. (http://pmid.us/21843213) Answer Key
Article 2: D-lactic acid in synovial fluid. A rapid diagnostic test for bacterial synovitis, J Rheumatol 1995; 22: 1504-1508. (http://pmid.us/7473474) Answer Key
Article 3: Synovial fluid lactic acid: A diagnostic aid in septic arthritis, Arthritis Rheum, 1978; 21(7):774-779. (http://pmid.us/697948) Answer Key
Article 4: Synovial fluid lactic acid in septic arthritis, N Z Med J 1981; 93(678): 115-117. (http://pmid.us/6943453) Answer Key
Vignette:
While working another crowded EM2 shift, you evaluate a 40-year-old construction worker with a painful and swollen right knee. He cannot recall any injury to the knee and has no significant past medical history, including no prior crystalloid arthropathy (gout), knee surgeries, or endovascular infections (endocarditis). The knee hurt yesterday, but is exquisitely painful today with a palpable effusion and no notable overlying erythema or warmth to the touch. You note no surgical scars or overlying abrasions. He does not take any daily medications and cannot recall his last primary care physician evaluation.
His vitals are BP 140/80, P 60, RR 18, T 36.8°C, and pulse ox 100%. He is lean and muscular without any other remarkable findings on physical exam, including no genitourinary complaints. Passive or active range of motion of his right knee is extremely uncomfortable for him.
You promptly order morphine analgesia for this patient and then contemplate the differential diagnosis of a unilateral swollen and painful knee. In the absence of traumatic injury, you doubt a hemarthrosis, although he is a construction worker so you cannot exclude an occult twisting injury or overuse syndrome. However, the two primary diagnoses you consider most pertinent to rule in or rule out today are crystalloid arthropathy and septic arthritis. Recent debates about the American College of Emergency Physicians’ decision not to participate in the Choosing Wisely campaign linger in your mind as you ponder what resources exist to “choose wisely” (pros and cons) in ED diagnostic decision making (see http://tinyurl.com/bs2vjj7). You speculate on the diagnostic accuracy of serum tests (CBC, ESR, CRP) and ponder the risks/benefits of arthrocentesis.
After reviewing the Roberts and Hedges’ Clinical Procedures in Emergency Medicine, 5th Edition chapter describing the methods to perform and interpret knee joint aspirations (pages 980-983), you discuss the procedure with the patient. After obtaining informed consent, you prep the patient and perform a time-out before administering 7 cc of 1% lidocaine. The subsequent joint aspirate, obtained without significant patient discomfort or other adverse sequelae, is a cloudy yellow hue. You send the fluid to the lab and request a synovial lactate, because you read somewhere that it is a valuable test to evaluate for septic arthritis. The lab calls you back 30 minutes later telling you that they have no protocol for synovial lactate and cannot run the specimen. You turn to the medical literature to explore this diagnosis further.
PICO Question:
Population: Adult patients with suspected septic arthritis
Intervention: Synovial fluid lactate testing
Comparison: Serum tests (WBC, ESR, CRP), synovial WBC, and synovial Gram stain
Outcome: Diagnostic accuracy (likelihood ratios) for non-gonococcal septic arthritis
Search Strategy:
EBM has taught you that well-done meta-analyses can be high-yield products for busy clinicians. Therefore, you turn to PubMed to conduct a diagnostic-study Clinical Query using the search term “septic arthritis” and select the systematic reviews (108 citations – see http://tinyurl.com/9822z3r). Among the first 10 citations are two high-quality reviews, but one of the reviews derives from the other, so you choose the meta-analysis on this topic, which yields the remainder of the manuscripts that you review.
Bottom Line:
The differential diagnosis for acute monoarticular arthritis presenting to the emergency department is broad and includes infections (bacterial, fungal, mycobacterial, viral), crystalloid arthropathy (gout, pseudogout), rheumatoid arthritis, and trauma. Based upon the sole emergency medicine-centric systematic review available on this topic, the best estimate pre-test probability for septic arthritis in the emergency department is 27%. This means that among all emergency department patients in whom the clinician believes that septic arthritis is sufficiently likely to merit an arthrocentesis, 27 in 100 will actually have non-gonococcal septic arthritis. Most experts agree that this is an overestimate, but there is currently no better estimate of pre-test probability to facilitate Bayesian reasoning.
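That 27% pre-test probability is easy to carry through Bayes' theorem once a test's likelihood ratio is known. Below is a minimal sketch, using the extreme D-lactate interval LRs (0.16 and 20) from the Gratacós study summarized later in this post:

```python
# Bayesian arithmetic: probability -> odds, multiply by LR, odds -> probability.
# Pre-test probability (27%) and the interval LRs (0.16, 20) are taken from
# the text of this post; any other values would be assumptions.

def post_test_probability(pretest_prob, lr):
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

print(f"{post_test_probability(0.27, 20):.0%}")    # ~88% after a strongly positive result
print(f"{post_test_probability(0.27, 0.16):.0%}")  # ~6% after a strongly negative result
```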
Clinicians are able to deduce the etiology of acute nontraumatic joint pain/swelling within 3 days in most cases, but in an era of “overdiagnosis” and “overtreatment,” emergency physicians lack the luxury of a 3-day admission for most monoarticular arthritis patients. Emergency physicians’ clinical decision making often skews to the “rule out worst case scenario” model, which in the case of an acutely painful/swollen joint includes septic arthritis. About 50% of septic arthritis cases involve the knee, but any synovial space can be infected. Septic arthritis management options include surgical drainage and systemic antibiotics, although needle aspiration has also been evaluated in select cases. Short-term mortality for treated septic arthritis ranges from 3% to 11% (Cooper 1986, Deesomchok 1990, Kaandorp 1995, Gupta 2001).
As is true with the majority of emergency medicine diagnoses (and medicine/surgery diagnoses), there is a paucity of diagnostic research to illuminate best-evidence practices for history, physical exam, clinical gestalt, and lab testing for septic arthritis. In fact, diagnostic studies of history and physical exam to evaluate septic arthritis in any setting are virtually non-existent. We identified two studies assessing historical risk factors and found none that evaluated physical exam. None of the studies (including the lab test studies) adheres to the STARD criteria in design or intent. Therefore, the available evidence may provide skewed estimates of diagnostic accuracy because of various biases, including spectrum bias, double-gold standard bias, and incorporation bias.
Nonetheless, the available evidence for historical risk factors identified just three factors with a likelihood ratio > 3: prosthetic joint with overlying skin infection (LR+ 15), joint surgery within the preceding 3 months (LR+ 6.9), and age > 80 (LR+ 3.5). The absence of risk factors does not significantly reduce the probability of septic arthritis (the range of LR- was 0.64-0.93). To make this diagnosis even more challenging, commonly available serum tests (WBC, ESR, CRP) for septic arthritis are inaccurate and probably worthless acutely (see the table below). Eliminating these serum tests from the lexicon of reflexive emergency department testing for septic arthritis is an attractive target to “Choose Wisely” (see Bukata essay 1 and essay 2) in reducing unhelpful, potentially expensive laboratory testing. During Journal Club, Orthopedic Surgery opined that serial assessment of ESR and CRP is anecdotally helpful for longitudinal diagnosis and prognosis in the evaluation of suspected septic arthritis. However, nobody could identify any evidence to support this practice. Although serum procalcitonin and tumor necrosis factor have promising positive likelihood ratios (as does PCR testing, which can identify the specific infecting organism within 3 hours), these tests are not commonly available in the emergency department.
Test             LR+       LR-
WBC*             1.4-1.7   0.28-0.84
ESR*             1.3-7.0   0.17-2.4
CRP*             1.1-4.5   0.3-0.7
Procalcitonin    5-∞       0.3-0.7
TNF              ∞         0.7
IL-6             1.5       0.9
IL-β             3.2       0.8
The suboptimal diagnostic test characteristics of history, physical exam, and serum tests leave synovial testing as the standard-of-care diagnostic strategy to evaluate for septic arthritis. The sole systematic review noted that the risks of arthrocentesis include post-procedural iatrogenic infection, ranging from 0.01% in healthy populations to 0.05% in immunocompromised patients. During this Journal Club, Orthopedic Surgery noted that:
1. They would prefer to perform the first and only arthrocentesis for suspected post-op joint infections to avoid iatrogenic complications (i.e. arthrocentesis-related infections)
2. Prosthetic joints typically yield less pronounced (lower) synovial leukocytosis than do native joint infections.
So how well does the synovial fluid distinguish non-gonococcal bacterial arthritis from other acute joint problems? Synovial gram stain has sensitivity 29% - 65% with an undefined specificity (Argen 1966, Goldenberg 1976, McCutchan 1990, Faraj 2002, McGillicuddy 2007). A synovial WBC > 100,000 cells/mm3 has an interval LR of infinity, whereas a synovial WBC 0-25,000 has interval LR of 0.33.
Our review of the synovial lactate data in conjunction with Laboratory Medicine and Orthopedic Surgery for this Journal Club suggests that synovial lactate is a promising test for the future ED evaluation of suspected acute septic arthritis. Unfortunately, two of the three studies did not distinguish whether they assessed D- or L-lactate! Bacteria, not humans (with rare exceptions such as short gut syndrome), produce D-lactate. On the other hand, humans, not bacteria, produce L-lactate. Laboratory Medicine believes that D-lactate is biologically plausible as a diagnostic marker of bacterial arthritis, but no commercially available laboratory test currently exists for D-lactate, which is a 3-day mail-out test to the Mayo Clinic. On the other hand, Laboratory Medicine hypothesizes that L-lactate correlates with the synovial white blood cell count, with higher sWBC counts producing more L-lactate. This hypothesis has not been tested. In fact, none of the synovial lactate studies evaluated L- or D-lactate in real time; instead, specimens were frozen and tested in batches long after the patient care episode was past. Here is the study-by-study synopsis of the available data.
Gratacós 1995 (Article 2): This is the only study to specify which form of lactate was assessed (D-lactate). Interval LRs range from 0.16 (D-lactate 0-0.05 mmol/L) to 20 (D-lactate >0.15 mmol/L). This test is superior to synovial WBC > 50,000 cells/mm3 (LR+ 9.3, LR- 0.47) or sPMN > 90% (LR+ 2.7, LR- 0.37) and may be particularly useful for partially treated septic arthritis.
Brook 1978: At a threshold of 5.5 mmol/L (note: more than an order of magnitude higher than in Gratacós' study, which may indicate that Brook was studying L-lactate), the synovial lactate LR+ is 5.9 and LR- is 0.04. More importantly, the interval LR for 0-5.5 mmol/L is 0.06, and for > 16.7 mmol/L it is infinity. Unfortunately, the authors do not describe whether they evaluated D-lactate or L-lactate, and by failing to adhere to the STARD criteria they leave open the possibility of significant biases, most of which skew estimates of sensitivity and specificity upward.
Moss: Based on this case-control, single-center, Rheumatology clinic study, a synovial lactate assay (probably L-lactate, based on the investigators' discussion) using the Calbiochem-Behring Rapid Lactate Kit accurately discriminates non-GC septic arthritis from other etiologies of acute monoarticular joint pain/swelling. At a threshold of 10 mmol/L (nearly twice the 5.5 mmol/L threshold proposed by Brook 1978), the LR+ is 22 and the LR- is 0. Using Brook's 5.5 mmol/L threshold, Moss demonstrates an LR+ of 2.7 and LR- of 0 (compared with 5.9 and 0.04 for Brook, respectively). The interval LR for 0-10 mmol/L is zero, versus 17.2 for 10-20 mmol/L and infinity for synovial lactate > 20 mmol/L.
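For readers who want to see where numbers like "LR+ 9.3" or "interval LR 17.2" come from, here is a minimal sketch of the underlying arithmetic. All counts below are hypothetical, invented only to show the calculations; they are not data from Gratacós, Brook, or Moss.

```python
# Minimal sketch of where dichotomous and interval LRs come from.
# All counts below are hypothetical, chosen only to show the arithmetic.

def dichotomous_lrs(tp: int, fn: int, fp: int, tn: int):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)    # P(test+ | disease) / P(test+ | no disease)
    lr_neg = (1 - sens) / spec    # P(test- | disease) / P(test- | no disease)
    return lr_pos, lr_neg

def interval_lr(dis_in_interval: int, dis_total: int,
                nondis_in_interval: int, nondis_total: int) -> float:
    # Interval LR: fraction of diseased vs. non-diseased patients whose
    # result lands in a given range; no single cut-off destroys information.
    return (dis_in_interval / dis_total) / (nondis_in_interval / nondis_total)

lr_pos, lr_neg = dichotomous_lrs(tp=18, fn=2, fp=10, tn=90)
print(f"LR+ {lr_pos:.1f}, LR- {lr_neg:.2f}")              # LR+ 9.0, LR- 0.11
print(f"interval LR {interval_lr(15, 20, 2, 100):.1f}")   # interval LR 37.5
```

An "infinite" LR, as reported for synovial lactate > 20 mmol/L, simply means that no non-diseased patient in the study had a value in that range.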
In summary, Lab Medicine, Ortho, and EM agreed that the available data on synovial lactate are insufficient to change the diagnostic management of these patients. However, all agreed that sufficient clinical equipoise exists to justify additional research. Future studies should assess diagnostic accuracy in a consecutive sample of ED patients with monoarticular arthritis in whom there is sufficient suspicion of non-GC septic arthritis to perform an arthrocentesis. It will be essential for these future studies to follow the STARD criteria and to report interval LRs. If these "level 2" diagnostic accuracy studies confirm synovial lactate as a useful adjunct to synovial WBC, the logical next step would be to assess the impact of awareness of synovial lactate on clinician decision-making.
Immediate Oral Beta-Blockers in STEMI
Jun 01, 2013
Prior to 2005, standard practice in treatment of STEMI in many emergency departments included giving IV beta-blockers (typically metoprolol 5 mg IV q 5 minutes up to 3 doses, unless contraindicated by hypotension or bradycardia). Patients were then admitted and given oral beta-blockers.
In 2005, the COMMIT trial (aka 2nd Chinese Cardiac Study/CCS-2) was published in Lancet: 45,852 patients were randomized to receive either placebo or metoprolol (15 mg IV in the emergency department followed by 50 mg PO q 6 hours for 48 hours, followed by 200 mg extended release metoprolol daily for 4 weeks). There was no difference between the treatment and control groups for the primary outcome (composite of death, reinfarction, or cardiac arrest) (odds ratio [OR] 0.96, 95% CI 0.90-1.01) or for death alone (OR 0.99, 95% CI 0.92-1.05). However, there was a significant increase in the risk of cardiogenic shock in patients receiving metoprolol (OR 1.30, 95% CI 1.19-1.41).
The result for our emergency department (and many others) was that we stopped giving immediate metoprolol to STEMI patients (though they still received oral metoprolol once admitted). The AHA/ACC STEMI guidelines changed as well, though not so drastically as our practice. The wording in 2004 was that "oral beta-blocker therapy should be administered promptly to those patients without a contraindication," compared to the current wording: "oral beta blockers should be initiated in the first 24 hours in patients with STEMI." The guidelines remained unchanged with regard to IV beta-blockers, of which they state it "is reasonable to administer…promptly to STEMI patients without contraindications."
Methods:
This observational study used data collected prospectively in the Austrian Myocardial Infarction Network from June 1, 2006 to December 31, 2010. By protocol, oral bisoprolol (2.5 mg) was recommended to be given by the emergency physician, either out-of-hospital or in the ED, within 30 minutes of confirmation of an acute STEMI. If bisoprolol was not given within 30 minutes, the protocol recommended giving it 24 hours after the first ECG. The decision to give bisoprolol immediately was made by the physician on duty. Contraindications included:
1) Systolic blood pressure (SBP) < 100 mmHg
2) Bradycardia (HR < 60 bpm)
3) 2nd or 3rd degree AV block
4) Clinical signs of heart failure (bilateral rales, cyanosis).
All patients eligible for percutaneous coronary intervention (PCI) also received aspirin, clopidogrel, bivalirudin, and abciximab. Patients receiving rtPA were given aspirin, clopidogrel, and unfractionated heparin.
The primary outcome was all-cause mortality beyond 48 hours after presentation: only patients who survived the first 48 hours were included in the survival analyses. Secondary outcomes included cardiovascular death at 30 days and 1 year. Mortality was also assessed in two predefined subgroups: elderly patients aged > 70 years and patients with ≥ 2 COMMIT shock index criteria (age > 70, symptoms > 12 hours, SBP < 120 mmHg, and HR > 110 bpm). Follow-up mortality data were obtained from the Statistical Department of the Austrian government, which collects cause-of-death data on all patients who die in a hospital in Austria.
Results:
664 patients with STEMI were analyzed: 343 (52%) received immediate β-blocker therapy, while 321 (48%) received delayed β-blocker therapy. The average age was 64 years, and 70% were male. The two groups were similar with respect to age, gender, previous medical history, time from onset of pain to first ECG, time from first ECG to PCI, and reperfusion strategy. The delayed β-blocker group had a significantly lower initial mean SBP (129 vs. 146 mmHg, p < 0.001), initial mean diastolic blood pressure (77 vs. 85 mmHg, p < 0.001), and initial mean heart rate (75 vs. 82 bpm, p < 0.001).
24 patients died within the first 48 hours (immediate β-blocker group: 5, delayed β-blocker group: 19), all related to the MI. For the remaining 640 patients, survival analysis revealed an overall mortality of 19.2% in the delayed β-blocker group vs. 10.7% in the immediate β-blocker group (p = 0.0022, RR = 0.56, NNT 11.7). Thirty-day and one-year cardiovascular mortality were lower in the immediate β-blocker group than the delayed β-blocker group, as were 30-day and one-year all-cause mortality (Table 1). The use of immediate β-blocker was not associated with an increase in all-cause mortality in either those with an increased risk for cardiogenic shock, or those over 70 years of age.
Table 1. 30-day and 1-year cardiovascular (CV) and all-cause mortality

Outcome                      Delayed βBL   Immediate βBL   P-value
30-day CV mortality          4.34%         0.89%           0.0058
1-year CV mortality          8.14%         1.88%           0.0003
Total CV mortality           13.41%        5.17%           0.0002
30-day all-cause mortality   4.66%         0.89%           0.0033
1-year all-cause mortality   9.79%         3.23%           0.0006
Total all-cause mortality    19.19%        10.73%          0.0022
Commentary:
If we believe this study, we need to give 12 STEMI patients 2.5 mg of oral bisoprolol (equivalent to 25 mg metoprolol) to save one life over the next 4.4 years, or 26 patients to save one life in the next month (these numbers would be even lower if we included deaths in the first 48 hours). Compare this with an NNT of 42 for aspirin (ISIS-2) and 50 for PCI (vs. thrombolysis) (Wang 2009) to prevent one death at one month.
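As a sanity check on those NNTs: the number needed to treat is just the reciprocal of the absolute risk reduction. This brief sketch reproduces the figures above from the mortality rates the paper reports for patients surviving the first 48 hours.

```python
# NNT = 1 / absolute risk reduction (ARR), using the mortality rates
# reported in the paper for patients who survived the first 48 hours.

def nnt(control_rate: float, treatment_rate: float) -> float:
    arr = control_rate - treatment_rate  # absolute risk reduction
    return 1 / arr

# Total all-cause mortality over ~4.4 years (delayed 19.19% vs. immediate 10.73%):
print(round(nnt(0.1919, 0.1073), 1))  # 11.8 -> treat ~12 patients to save one life
# 30-day all-cause mortality (4.66% vs. 0.89%):
print(round(nnt(0.0466, 0.0089), 1))  # 26.5 -> treat ~26-27 patients
```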
It seems biologically implausible that giving a small dose of an oral beta-blocker 24 hours earlier in STEMI could result in such a large reduction in mortality. This is especially true considering the large mortality difference seen in the first 48 hours (5 deaths in the immediate group vs. 19 in the delayed group). The authors themselves note "The assumption is that patients dying within the first 48 hrs have died independently of immediate or delayed β-blocker treatment, e.g., death occurred prior to a sufficient β-blocker effect," yet go on to attribute the large mortality reduction to treatment effect. Given the observational, non-randomized nature of the study, it seems more plausible that a prognostic imbalance between the two groups at baseline (rather than an effect of the treatment) explains the difference in outcomes, including:
· Reported: the only reported baseline differences were SBP, diastolic blood pressure, and heart rate, all of which were lower in the delayed (control) group. While decreased SBP has been shown to increase mortality in STEMI, lower heart rates have been shown to be protective (Gale 2008).
· Not reported: location of MI (anterior STEMI has been shown to predict increased mortality compared to other locations [Stone 2008]).
· Not reported: a history of CHF has been shown to increase mortality (Gale 2008).
· Hidden: the proportion of patients with ≥ 2 shock index criteria was significantly imbalanced: 79 (26.2%) patients in the delayed β-blocker group vs. 54 (16.0%) in the immediate β-blocker group (p = 0.0016). And that's among patients who survived the first 48 hours!
· Hidden: there was an increase in both the incidence of hypotension (17 vs. 5, p = 0.008) and bradycardia (41 vs. 14, p < 0.001) in the delayed treatment group. It seems unlikely that withholding beta-blockers for 24 hours would cause an increase in these complications during that same period; it is more likely that a prognostic imbalance put the control group at increased risk of both.
Bottom Line:
Despite a large reduction in mortality (19.19% vs. 10.73%) with immediate vs. delayed oral beta-blocker administration in STEMI, it seems unlikely that this difference was due to treatment effect; it is more plausible that it was due to prognostic imbalance. As the authors state, a randomized controlled trial would be needed to resolve this issue: randomization attempts to balance both known and unknown confounders between treatment groups, so that any difference in outcome can be attributed to the treatment.
EBM Teaching Point:
The authors used a survival analysis and presented Kaplan-Meier curves of mortality over time. If we take the total all-cause mortality in the delayed beta-blocker group as an example, we see 41 deaths out of 302 participants. If we use these numbers to calculate a simple percentage, we do not arrive at the 19.19% probability of death reported. This is because not all of the patients could be followed for the same duration. Imagine a patient enrolled on day 1 of the study (June 1, 2006) who is followed for approximately 4.5 years (the last point plotted on the Kaplan-Meier curve): we know whether or not that patient died over a 4.5-year period. Now imagine a patient enrolled on the last day of the study (December 31, 2010). If we followed this patient for 4.5 years, we would still be watching for the outcome (and would continue doing so until July of 2015!). Instead, the authors used whatever follow-up each patient had accrued by the time outcome ascertainment stopped, censoring patients with incomplete follow-up at that point. Entering these data into statistical software yields a survival analysis and Kaplan-Meier curve, from which the probability of survival (or death) over time is estimated.
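To make the censoring idea concrete, here is a minimal sketch of the Kaplan-Meier product-limit estimator on a made-up five-patient cohort. The times and events below are toy values invented for illustration; the point is that the naive deaths-over-enrolled percentage understates the estimated probability of death whenever censored patients have short follow-up.

```python
# Minimal sketch of the Kaplan-Meier product-limit estimator with
# right-censoring, on a made-up five-patient cohort (toy data only).
# Simplification: assumes all event times are distinct.

def kaplan_meier(times, events):
    """times: days of follow-up; events: True = died, False = censored.
    Returns (time, cumulative survival probability) at each death."""
    at_risk = len(times)
    surv, curve = 1.0, []
    for t, died in sorted(zip(times, events)):
        if died:
            surv *= (at_risk - 1) / at_risk  # fraction surviving this death
            curve.append((t, surv))
        at_risk -= 1  # death or censoring, the patient leaves the risk set
    return curve

times = [100, 200, 300, 400, 500]           # days observed
events = [False, True, False, True, False]  # two deaths, three censored
for t, s in kaplan_meier(times, events):
    print(f"day {t}: survival {s:.3f}")
# day 200: survival 0.750
# day 400: survival 0.375
```

In this toy cohort the naive estimate is 2/5 = 40% dead, but the Kaplan-Meier estimate of death by day 400 is 1 - 0.375 = 62.5%: censored patients count only for the time they were actually observed rather than being treated as long-term survivors. The same mechanism explains why 41/302 (13.6%) understates the reported 19.19%.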