Transparency in Healthcare Technology

By Aaron Patzer | April 22, 2019

Online, the quote “Tell anyone and everyone your idea without fear they're going to steal it” is often attributed to me.

I don’t remember exactly when I said that, but it sounds like something I would say, and more importantly, I believe it.

Originally this was meant in the context of entrepreneurship. So many people I met were afraid to share any details of what they were working on except under NDA.

I’m against that. I’ve shared proprietary things Vital is doing in AI, and more importantly, what we will be doing in the future. I suppose Epic or Cerner or another startup could steal our playbook. But I know they won’t. Everyone has their own endless roadmaps and to-do lists. Transparency is far safer than you think, from a business strategy perspective.

But more important than transparency in strategy is transparency in science. Health matters. And when it comes to AI, accuracy matters. Actually, strictly speaking, accuracy doesn’t matter! It’s better to measure the area under the receiver operating characteristic curve (AUC, or C-statistic), which captures the trade-off between true positive rate and false positive rate, and talk to clinicians about whether a false positive or a false negative has more serious implications.

An AUC of 0.50 (50%) for a binary decision means your system is flipping a coin. There’s no hard and fast rule, but anything above 85% is considered good. It all depends, of course, on what you’re trying to predict. We’re usually pretty happy when our models get up into the 90s.

At Vital, we have developed a platform for AI, rather than one-off algorithms. This makes it relatively easy to test new algorithms and neural network architectures. But rather than being a black box, we’ve chosen to publish each of our core algorithms in peer reviewed journals before putting them into production.

In fact, we try to take it a step further: we don’t write the papers. We ask an independent, unpaid researcher to see if they can replicate the results. We have worked with researchers on two papers using natural language processing (NLP) of doctors’ and nurses’ notes to predict hospital admission and the imaging (X-ray, CT scan, ultrasound) likely to be needed.

Another, submitted and under review, shows that just 1-3 sentences of a nurse’s triage note, with no other variables, can give an AUC of 80%. Combine that with vital signs, age, and a few other items and you can easily get into the 90%+ sweet spot. It’s amazingly predictive. This algorithmic technique is only a few years old, and I’ve not seen it put into any commercial use outside of Silicon Valley.

  • “Prediction of Emergency Department Patient Disposition Based on Natural Language Processing of Triage Notes,” Sterling et al., submitted to the Journal of Emergency Medicine.
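The core idea of predicting disposition from a short triage note can be sketched with a toy bag-of-words logistic regression. This is a minimal illustration of the technique, not Vital’s model: the example notes, vocabulary, and training loop below are all invented for demonstration.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def train_logreg(notes, labels, epochs=200, lr=0.5):
    """Bag-of-words logistic regression trained with full-batch
    gradient descent. Returns a predict(note) -> probability closure."""
    vocab = sorted({t for n in notes for t in tokenize(n)})
    idx = {t: i for i, t in enumerate(vocab)}
    feats = [Counter(tokenize(n)) for n in notes]
    w = [0.0] * len(vocab)
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * len(vocab)
        gb = 0.0
        for f, y in zip(feats, labels):
            z = b + sum(w[idx[t]] * c for t, c in f.items())
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y  # gradient of log loss w.r.t. z
            gb += err
            for t, c in f.items():
                gw[idx[t]] += err * c
        b -= lr * gb / len(notes)
        for i in range(len(w)):
            w[i] -= lr * gw[i] / len(notes)

    def predict(note):
        # Unknown words are simply ignored at prediction time.
        z = b + sum(w[idx[t]] * c
                    for t, c in Counter(tokenize(note)).items() if t in idx)
        return 1.0 / (1.0 + math.exp(-z))

    return predict

# Invented toy triage notes: 1 = admitted, 0 = discharged.
notes = [
    "chest pain and shortness of breath",
    "severe abdominal pain with fever",
    "minor cut on finger",
    "mild sore throat two days",
]
labels = [1, 1, 0, 0]
model = train_logreg(notes, labels)
```

A production system would use a richer model and far more data, but even this sketch ranks a worrying note above a benign one, which is all the AUC measures.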

We could keep it secret. We are, after all, a for-profit company, not academic researchers, and this puts us ahead of the competition. However, one of our commitments as a company is transparency. Our techniques should be scrutinized and improved upon by other startups or researchers, so long as they too publish their results. One of the great tragedies of healthcare is its secrecy. It’s damn near impossible even to see a competitor’s software unless you’re looking over the shoulder of a doctor.

In a time when companies like Theranos have eroded confidence in healthcare startups, there’s an obvious solution. Just be honest. Show your work, not just the end result. Let the world pick apart what you’re doing and criticize your techniques. It will make your product better, patients safer, and the world of healthcare just a touch better.
