Blog of PEM Research Foundation


Guerrilla academics and Sci-hub

posted Sep 22, 2016, 10:14 PM by Admin PEMRF   [ updated Sep 22, 2016, 10:14 PM ]

Sci-hub allows users to enter the DOI of any article, and in short order the PDF of the article will generally appear on the screen.

Sci-hub (available at Sci-hub.cc) is an indispensable tool for those who attempt to pursue any form of academic reading or writing but do not have ready access to a university library. In fact, Sci-hub works so well that it is also extremely popular among those who do in fact have ready access to one.

Sci-hub has attracted a fair bit of attention from large publishers, who feel that it is stealing their intellectual property. Given that the price for a reader to download a single article is $30-$45, and that a writer will skim at least a thousand articles while preparing an article, dissertation, thesis, or manuscript, the big publishers feel that Sci-hub is depriving them of thousands of dollars of revenue.

The reality, of course, is that the prices set by the publishers bear no relation to any planet inhabited by researchers, particularly the independent sort of researcher that the foundation tends to favor. Researchers simply have to do without, email original authors back and forth, or rely on social networks, both virtual and real, to access the information they need. Even researchers within university settings have to jump through so many hoops to access some of this literature via their own library access that they too have come to rely on Sci-hub.

Sci-hub is necessary because publishers are greedy beyond belief. Arcane library access rules derive from this greed, thereby sending younger academics into the arms of Sci-hub, from whence they may never return. Others, who attempt to do research outside the embrace of the university, have long maintained exchange rings centered on Sci-hub.

In short, Sci-hub is the great equalizer in terms of access to the published literature. As such, Sci-hub deserves support, a tricky thing for a foundation to do but something that individuals can do.


Derivation of Candidate Clinical Decision Rules to Identify Infants at Risk for Central Apnea

posted Dec 5, 2015, 12:57 PM by Admin PEMRF

Sending an infant home with apparently mild bronchiolitis only to have him or her return dead shortly thereafter is every parent's and physician's nightmare. Such eventualities are rare, but they do recur. They occur because central apnea typically strikes early in bronchiolitis, when the disease itself is mild and clinicians are inclined to send the child home.

Some of these rare but tragic deaths could be prevented by applying the clinical rules presented in research which the foundation supported.

This foundation-supported research finally saw publication in Pediatrics in November. The article “Derivation of Candidate Clinical Decision Rules to Identify Infants at Risk for Central Apnea” represents the culmination of a decade of work by foundation members: eight years spent on recruitment, study design, institutional review board approval, analysis, and re-analysis based on audience comments at scientific meetings, and finally the writing of the manuscript, which itself took a great deal of time. Ultimately the rules presented are a subset of many possible rules, and a possible next step is a futility-type study to try to eliminate some of them while laying the groundwork for multi-center validation.

The authors of this study were Paul Walsh, Pádraig Cunningham, Sabrina Merchant, Nicholas Walker, Jacquelyn Heffner, Lucas Shanholtzer, and Stephen J. Rothenberg, but a great deal of additional work was performed by the very many research volunteers who rotated through the clinical site during the eight years over which the study was performed.

The paper can be read here: http://pediatrics.aappublications.org/content/136/5/e1228. There is a paywall; foundation supporters should contact us for a free reprint.


Pacifiers to decrease SIDS

posted Mar 29, 2014, 10:45 PM by Admin PEMRF

Pacifier use in the first six months of life has been consistently shown to decrease the risk of sudden infant death syndrome (SIDS). That most parents don't know this fact was a key finding from a research project supported by the foundation. Older parents seemed less aware of it than younger ones. Telling parents in the ED worked and increased pacifier use in infants. There didn't seem to be a downside either: there was no increase in ear infections, which has been cited as a worry for parents in this regard. The article was published in PeerJ this month. The link is here: https://peerj.com/articles/309/ and the PDF can be downloaded here: https://peerj.com/articles/309.pdf. This is an open access article on the PeerJ platform we've enthusiastically blogged about before.
 
There are a few posts' worth of material in this: how we supported the project, which is a model of doing more with less; peer review; and how scientists sometimes forget that some things just can't in good conscience be randomized.
 

Slushy party for calves

posted Nov 16, 2013, 12:35 PM by Admin PEMRF   [ updated Nov 16, 2013, 3:49 PM ]

  

So one of the odder purchases the foundation has made is slushy cone mix. And no, we weren't fundraising among eight-year-olds. We are supporting, in a peripheral way, a study looking at the prostaglandin E hypothesis in bronchiolitis. This is related to the apnea work we have previously supported but addresses the effects on bronchiolitis itself. This project involved calves, as in mooo!


Bovine models are expensive, and the foundation's effort on this one is dwarfed by the opportunity costs incurred by the PI, who is doing this instead of a more lucrative community practice, and by NIH and university seed monies. Nonetheless the foundation's role was critical. Big organizations move slowly, drugs arrived late, and when a matching placebo (and personal protective equipment) were urgently needed hours before the experiment was due to start, it was the foundation that quickly stepped in and made the necessary purchases. The matching substance for the placebo turned out to be slushy mix, and it worked wonderfully, at least as an indistinguishable placebo. As for the experiment, more anon...

Holsteins, slushy, and ibuprofen... the key ingredients for the latest research in bronchiolitis in pediatric emergency medicine.



PeerJ gives undergraduates a break

posted Aug 25, 2013, 9:51 PM by Admin PEMRF   [ updated Aug 25, 2013, 11:51 PM ]

Remember, back last year we ended a blog entry with:

"PS. If anyone in PeerJ is listening: Tweaking membership rules a bit to allow one free article for students and young volunteer research assistants would be incredibly helpful. These are often still teenagers whose hard work  truly does deserve authorship.)"

Well, evidently somebody was listening (though probably not to us). Now authors who were undergraduates when the research was being done don't have to find $99. This is very helpful. It also leaves no excuse for those who would airbrush volunteer workers off the authorship line. The link is here.

"Well that's all fine in practice, but how does it work in theory ?"

posted Aug 25, 2013, 7:07 PM by Admin PEMRF   [ updated Aug 25, 2013, 10:16 PM ]


 


Dr. Garret FitzGerald (1926-2011), former Taoiseach (Premier) of Ireland, architect of the Anglo-Irish peace agreement, and an academically inclined economist who once arrived at a public function in one brown and one black shoe.




"Well that's all fine in practice, but how does it work in theory ?"

When presented with a solution to a difficult problem, this apparently was former Taoiseach Garret FitzGerald's response. It is the kind of answer that gets academics a bad name. It struck me today, however, in the context of developing rules to predict rare events.

 

The classic 'I made it up because I'm an expert and then I validated it' rule is the apnea rule proposed by Wilwerth et al. After coming up with the rule, the authors tested it on a retrospective data set, which apparently they had not reviewed prior to developing the rule. And their rule perfectly predicted all the infants with apnea in the data set.

 

The foundation is supporting a project which is currently attempting to derive a rule prospectively. The authors are using the standard logistic regression approach. But even with the adjustment for rare events, problems arise when the goal being sought is a model with 100% sensitivity. One hundred percent sensitivity is required for outcomes such as apnea (it is not really good enough to send home just a few children to die). When the events become very rare and a zero miss rate is required, the imbalance in the data makes the task almost impossible. CART analysis didn't help, for the same reasons.

 

And yet the authors noted that with a little thought it was fairly easy to derive a series of rules which, when tested, had 100% sensitivity and as high as 60% specificity. Our researchers could do this because humans can fairly easily bias a classification tool when they create it manually. At first blush such a tool, with 100% sensitivity and 60% specificity for a potentially fatal outcome in small infants, sounds wonderful. But is this really any different from the exercise performed by Wilwerth et al.? One can argue that it is more data-informed, done as it was following extensive univariable and multivariable modeling. This is true; however, it does not excuse what is happening here. What is happening here, and what happened in Wilwerth et al., is a classic example of overfitting. Predictably, two subsequent groups of authors failed to validate the rule proposed by Wilwerth et al., one in a prospective and one in a retrospective data set, both in centers different from that of the original authors. The foundation does not want to see its work similarly discredited.
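A toy sketch of the trap, with entirely synthetic data and a single hypothetical predictor (nothing here is taken from the actual study):

```python
import random

random.seed(1)

def make_cohort(n, n_apnea=20):
    # entirely synthetic cohort: age in days plus an apnea outcome,
    # apneic infants skewed younger but heavily overlapping controls
    cohort = []
    for i in range(n):
        apnea = i < n_apnea
        age = max(random.gauss(30 if apnea else 60, 25), 1)
        cohort.append((age, apnea))
    return cohort

def sensitivity(cohort, cutoff):
    positives = [age for age, apnea in cohort if apnea]
    return sum(age <= cutoff for age in positives) / len(positives)

train = make_cohort(1000)

# the hand-crafted "rule": flag any infant at or below the oldest
# apneic infant seen in training -- 100% sensitive by construction
cutoff = max(age for age, apnea in train if apnea)

print(sensitivity(train, cutoff))              # 1.0, guaranteed in-sample
print(sensitivity(make_cohort(1000), cutoff))  # often below 1.0 out of sample
```

The in-sample sensitivity is perfect for exactly the same reason the Wilwerth et al. rule was perfect: the classifier was shaped around the very events it is then scored on.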

 

The fundamental problem comes down to this: in both cases informed experts are using either their own experience, or data plus their own experience, to craft a classifier which works perfectly in their experience, i.e. their ER, but not in anyone else's. This is simply not good enough. Inevitably the data will be overfit and the classifier will not be generalizable (to somebody else's ER or infant).

 

As Garret FitzGerald said, “It's all very well in practice, but how does it work in theory?”

 

SMOTE'ing and other oversampling approaches intended to balance the data set can be taken to the extreme, such that cases are oversampled in a proportion that effectively penalizes the controls. This is distinct from applying a penalty for false negatives in the evaluation phase of a classifier, although this distinction is not often brought out in the clinical literature. I'll discuss how this all pans out in a subsequent entry.
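A minimal sketch of the distinction, with made-up counts (naive duplication stands in for SMOTE, which would interpolate new synthetic cases rather than copy existing ones):

```python
import random

random.seed(0)

# hypothetical imbalanced data set: 5 apnea cases against 95 controls
cases = ["case"] * 5
controls = ["control"] * 95

# extreme random oversampling: resample cases until they outnumber
# controls 2:1, which effectively penalizes every control in any
# frequency-based learner fit to the resampled data
target = 2 * len(controls)
balanced = [random.choice(cases) for _ in range(target)] + controls
case_fraction = balanced.count("case") / len(balanced)
print(round(case_fraction, 2))  # 0.67

# the alternative: leave the data alone and penalize false negatives
# only when scoring the fitted classifier
def evaluation_cost(false_neg, false_pos, fn_penalty=10):
    return fn_penalty * false_neg + false_pos
```

The first approach changes what the model is fit to; the second changes only how candidate models are judged, which is the distinction the clinical literature tends to blur.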

 

Foundation supported project publishes in Emergency Medicine Journal

posted Aug 25, 2013, 6:56 PM by Admin PEMRF   [ updated Aug 25, 2013, 9:41 PM ]


This foundation-supported research project, which examined the validity of antigen testing for respiratory syncytial virus (RSV) in the emergency department, has finally been published ahead of print in the Emergency Medicine Journal (EMJ). EMJ is the emergency medicine journal of the BMJ stable. The paper, entitled “Is the interpretation of rapid antigen testing for respiratory syncytial virus as simple as positive or negative?”, was a prospective diagnostic study which included 607 infants and toddlers with bronchiolitis. The authors found, perhaps unsurprisingly, that the antigen test did not perform nearly as well as the manufacturers claim it does. They then took this a step further and demonstrated how the relatively poor performance could be improved by interpreting the result in the context of the other children presenting to the emergency department.

This is of course exactly what pediatric emergency physicians have always done; if a resident tells us a patient is RSV positive in the middle of June, our initial reaction is “Well, maybe”. This study demonstrates why this is the case and also helps quantify that ‘maybe’ in an unusual graphical portrayal of the contextual multivariable logistic regression analysis.
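That seasonal ‘maybe’ is, at bottom, Bayes' theorem. A rough sketch of the arithmetic (the sensitivity, specificity, and prevalence figures below are invented for illustration, not taken from the paper):

```python
def ppv(sens, spec, prev):
    # positive predictive value via Bayes' theorem:
    # P(disease | positive) = TP / (TP + FP)
    tp = sens * prev
    fp = (1 - spec) * (1 - prev)
    return tp / (tp + fp)

sens, spec = 0.80, 0.97              # hypothetical test characteristics
print(round(ppv(sens, spec, 0.40), 2))  # RSV season: 0.95
print(round(ppv(sens, spec, 0.01), 2))  # mid-June:   0.21
```

The same positive result that is near-certain in January is mostly a false alarm in June, which is exactly the clinical instinct the study quantifies.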

The link is here. Unfortunately there is a paywall. Foundation supporters can receive a free reprint directly from the foundation.

On Measuring Agreement

posted Aug 25, 2013, 6:43 PM by Admin PEMRF   [ updated Aug 25, 2013, 9:58 PM ]

Kappa is naughty and should be sent to bed early

A current project that the foundation is supporting involves measuring inter-rater agreement. Pediatric emergency medicine tends to look no further than kappa (denoted by the Greek letter κ). This is a pity. Cohen's κ was originally designed as a single summary statistic to describe chance-adjusted agreement on categorical ratings. Cohen's κ appears easily interpreted; its range is -1 to +1, implying perfect disagreement and perfect agreement respectively. Descriptive terms such as ‘moderate’ and ‘poor’ agreement have been published to further ease interpretation. Physician researchers in particular seem to like it. A quick jaunt through the EM journals' inter-rater studies shows it to be nearly as popular as Vicodin. The use of a single measure that adjusts for chance agreement is seductive, so challenging its use is likely to be received, well, like a prescription for ibuprofen. But there is a wrinkle with κ.


A disadvantage of the κ statistic is that it yields lower values the further the prevalence of the outcome being studied deviates from 0.5. Scott's π (subsequently extended by Fleiss) suffers the same limitation. This so-called paradox of κ, in which very high observed agreement is accompanied by a dismal κ statistic, is well known to statisticians. Consequently Cohen's κ and Scott's π should be avoided when one of the categories being rated is much more or less common than another. An alternative, the agreement coefficient (AC1), has been proposed by Gwet. The AC1 is more stable than κ, although it may give slightly lower estimates than κ when the prevalence of a classification approaches 0.5. The AC1 is relatively new and as yet does not appear widely in the medical literature despite recommendations to use it. To run the AC1 you can use the SAS macro, buy the Excel implementation ($45 or so), or use this function in R (don't be scared by the strange url). AC1 does not seem to be implemented in Stata, although it would not be difficult to do. AC1 may not be the panacea it appears, either. One of its underlying premises is that chance agreement occurs only if at least one of the raters on occasion rates some individuals randomly.
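The paradox, and AC1's stability, fit in a few lines of Python. The 2x2 counts below are made up purely to exhibit the effect (90% raw agreement on a very skewed outcome):

```python
# 2x2 agreement table: both yes, rater1-only yes, rater2-only yes, both no
a, b, c, d = 90, 5, 5, 0
n = a + b + c + d

po = (a + d) / n  # observed agreement = 0.90

# Cohen's kappa: chance agreement from each rater's marginal totals
p1_yes, p2_yes = (a + b) / n, (a + c) / n
pe_kappa = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
kappa = (po - pe_kappa) / (1 - pe_kappa)

# Gwet's AC1 (two categories): chance agreement is 2*pi*(1-pi), where
# pi is the mean proportion of "yes" ratings across the two raters
pi = (p1_yes + p2_yes) / 2
pe_ac1 = 2 * pi * (1 - pi)
ac1 = (po - pe_ac1) / (1 - pe_ac1)

print(round(po, 2), round(kappa, 2), round(ac1, 2))  # 0.9 -0.05 0.89
```

Ninety percent raw agreement, a *negative* κ, and an AC1 near 0.9: the paradox in miniature.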


For ordinal scales a weighted kappa has been proposed: the penalty for disagreement is weighted according to the number of categories by which the raters disagree. The results depend both on the weighting scheme chosen by the analyst and on the relative prevalence of the categories. With a thoughtful choice of weights, weighted κ reduces to the intraclass correlation, but of course the dirty little secret is that you *could* set the weights to anything you want. Scott's π and Gwet's AC1 can also be weighted. When weighted, Gwet's AC1 is often referred to as AC2, and the same weighting caveats apply.
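A sketch of weighted κ with linear weights on an invented 3-category table (the counts are illustrative only; swapping in a different weight matrix is exactly the analyst's-choice problem noted above):

```python
# 3x3 contingency table of two raters' ordinal ratings (counts invented)
table = [
    [20, 5, 0],
    [4, 30, 6],
    [1, 5, 29],
]
k = len(table)
n = sum(sum(row) for row in table)

# linear weights: full credit on the diagonal, partial credit shrinking
# with the number of categories the raters disagree by
w = [[1 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]

row_m = [sum(table[i]) / n for i in range(k)]
col_m = [sum(table[i][j] for i in range(k)) / n for j in range(k)]

po = sum(w[i][j] * table[i][j] / n for i in range(k) for j in range(k))
pe = sum(w[i][j] * row_m[i] * col_m[j] for i in range(k) for j in range(k))
kappa_w = (po - pe) / (1 - pe)
print(round(kappa_w, 2))  # 0.73
```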

 

A different approach is to regard ordinal categories as bins on a continuous latent scale. Polychoric correlation estimates the correlation between raters as if they were rating on a continuous scale. Polychoric correlation is, at least in principle, insensitive to the number of categories and can even be used where raters use different numbers of categories. The correlation coefficient, ranging from -1 to +1, is interpreted in the usual manner. A disadvantage of polychoric correlation is that it is sensitive to distributional assumptions, although some recognize polychoric correlation as a special case of latent trait modeling, thereby allowing those assumptions to be relaxed. It is easy to conceive of situations where the assumption of a normal distribution is unlikely to hold.


Another coefficient of agreement, “A”, proposed by van der Eijk, was specifically designed for ordinal scales with a relatively small number of categories dealing with abstract concepts. This measure “A” is insensitive to the standard deviation. “A”, however, contemplates large numbers of raters rating a small number of subjects (such as voters rating political parties). This seems less applicable to clinical PEM but might be useful when asking many physicians to rate a few management strategies, or patients and parents a few hospitals. It has been implemented in Stata, which seems to be the most widely used stats program in the specialty.


Foundation supported project wins undergraduate research prize

posted Aug 25, 2013, 6:27 PM by Admin PEMRF   [ updated Aug 25, 2013, 8:41 PM ]



Carolina Rodriguez won the best undergraduate researcher prize for her oral presentation at the Western Regional meeting of the Society for Academic Emergency Medicine in Long Beach, California. Her presentation covered partial data from the foundation-supported project using pacifiers to decrease SIDS deaths. This was a conceptually simple project, building on the notion that research assistants already deployed in the ER could usefully teach parents about SIDS-prevention parenting practices. The lack of knowledge of the role of pacifiers was quite striking, and the foundation believes that although such interventions lack the glamour of the latest molecular biology research, they are inexpensive and important in a very practical way. The meeting was held in March.


In praise of fishermen

posted Oct 1, 2012, 10:52 AM by Admin PEMRF   [ updated Aug 25, 2013, 9:01 PM ]

The formalisation of retrospective studies, which has been well described elsewhere (Gibson et al.), has had the advantage of increasing the quality of data collected. Even the simplest measures, such as forcing the use of standardised data collection forms and pre-specified hypotheses, ensure that the data in the charts are more truly represented in what is collected on the data sheet. The involvement of multiple data collectors, and their training, has expanded the scope of what can be gathered from charts; in particular, it has expanded the number of charts that can be reviewed. Consequently the quality of retrospective studies can be expected to improve, and has improved, over time. Individual investigator biases should be less likely to be carried over into research conclusions.

The disadvantage of this approach is that the rigorous pre-formulation of hypotheses creates the risk that novel observations buried in the data may be missed. It is often said that retrospective studies should be considered only hypothesis generating. (This is only partially true. It may be possible under some circumstances to use retrospective data to prove causation. The occasions when this can happen are limited, but it is nonetheless possible.) Hypothesis generation involves a deep knowledge of the subject matter; however, it also involves a certain amount of serendipity. This can occur when examining large numbers of patient charts or otherwise reviewing large numbers of cases, and it requires a certain flexibility in what one is willing to consider. One approach is to insist on collecting a large number of data points when doing retrospective studies. Having once collected 119 data points per visit from paper charts, the prickly cactus understands that this is not a trivial undertaking. Moreover, it requires extensive training if non-physician data abstractors are used.

Such approaches are considered data dredging and are frowned upon in many circles, but it is only by going on fishing trips that we can hope to find fish. Providing we are willing to recognize the limitations of such findings, and prospectively validate them, useful information can be discovered. This, in the cactus's opinion, is far preferable to collecting small datasets retrospectively and then reinforcing old knowledge but learning nothing new.

Prospective data collection is also prone to the same problem. In some ways it is even more prone to it, as there is a certain imperative to ensure that the data collection forms are not unwieldy. Consequently every question is carefully thought out and the number of questions asked is limited. This is of course a good thing, but it risks missing new risk factors and potentially even causes of disease. Again, the risk is the same as in retrospective research that is too narrowly focused: established knowledge gets reinforced, and new knowledge waiting to be discovered remains hidden.

Ironically, therefore, the advent of electronic medical records could, if used correctly, make retrospective research more exciting than prospective research. At the very least, clinical researchers must now be far better trained in database and data management than in the past.

 

 

