Sunday, December 9, 2007

Algorithms and Validation


A friend of mine asked me, "If the Framingham risk assessment fails to take family history into account, then why do we use it to guide anti-cholesterol therapy?" My answer was, "It is scientifically validated."

In medicine we like to do things based on evidence. It is true that we do many things that do not have solid evidence behind them. But we always try to acquire data and then reach a rational conclusion, leading to a treatment. When it comes to risk prognostication, validation studies are extremely helpful. They also often keep us from getting sued.

So how was the Framingham validated?

The Framingham Heart Study and the Framingham Offspring Study were the first epidemiological studies to prospectively collect population-based data on the association between risk factors and the occurrence of fatal and non-fatal coronary and other cardiovascular events in a systematic and sustained fashion. They have been dissected for their validity over the years.

So can we use the Framingham for everyone? Well, in Europe they tried. Several articles like this one show that a risk tool must be validated in the population you plan to use it on. The Framingham doesn't work so well on the Dutch. However, when modified by REGICOR, Spain's NHLBI, it seemed to perform well for the Spanish. The same held for the Chinese modification.
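The recalibration idea behind those modifications can be sketched in code. A Framingham-style score is a survival model: a patient's risk factors are weighted, compared to the cohort's averages, and applied to the cohort's baseline event-free survival. Adapting the score to a new population typically keeps the original coefficients but swaps in the local risk-factor means and baseline survival. The coefficients and means below are made-up illustrations, not the published Framingham or REGICOR values.

```python
import math

def ten_year_risk(x, betas, x_means, baseline_survival):
    """Framingham-style risk estimate from a Cox-type model.

    x, betas, and x_means are dicts keyed by risk factor;
    baseline_survival is the cohort's 10-year event-free survival
    for a person at the cohort's average risk-factor levels.
    """
    lp = sum(betas[k] * (x[k] - x_means[k]) for k in betas)
    return 1.0 - baseline_survival ** math.exp(lp)

# Illustrative (made-up) coefficients -- NOT the published values.
betas = {"age": 0.05, "total_chol": 0.01, "hdl": -0.03,
         "sbp": 0.02, "smoker": 0.6}
patient = {"age": 55, "total_chol": 220, "hdl": 45,
           "sbp": 140, "smoker": 1}

# Original cohort: its own risk-factor means and baseline survival.
framingham = dict(
    x_means={"age": 49, "total_chol": 210, "hdl": 50,
             "sbp": 130, "smoker": 0.35},
    baseline_survival=0.90)

# Recalibrated version: same betas, but local means and a higher
# baseline survival because the local event rate is lower.
local = dict(
    x_means={"age": 51, "total_chol": 215, "hdl": 52,
             "sbp": 132, "smoker": 0.30},
    baseline_survival=0.96)

print(ten_year_risk(patient, betas, **framingham))
print(ten_year_risk(patient, betas, **local))
```

The same patient gets a noticeably lower estimate under the recalibrated model, which is exactly why an unrecalibrated Framingham overstates risk in lower-incidence populations.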

You may now be asking what the heck this has to do with Personalized Medicine. My answer: everything. You see, part of personalized medicine is prediction. That's why Helix Health of Connecticut trademarked "Prediction, Prevention, Privacy." These are the pillars of genomic medicine.

How can you predict the likelihood of Alzheimer's in 5 years? Well, there are some corporate genomics companies doing it without ever having submitted articles for peer-reviewed publication. They have "trade secret" algorithms that calculate risk. What the hell? How can you trust the accuracy of an algorithm without validation?

This is why we advise against using the Gail model for breast cancer risk. It only works for certain populations, and absolutely not for African Americans. There are new attempts at this type of risk stratification, and several attempts to defend the Gail model. But what has evolved is even more important: new algorithms that were put to the test and peer-reviewed.
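Putting an algorithm "to the test" on an independent cohort usually means checking two things: discrimination (does the algorithm rank people who had events above people who didn't, summarized by the c-statistic) and calibration (do predicted risks match observed event rates). A minimal sketch of the discrimination check, using made-up predictions and outcomes:

```python
from itertools import product

def c_statistic(predictions, outcomes):
    """Concordance: the fraction of case/non-case pairs in which the
    case received the higher predicted risk (ties count half).
    0.5 means no better than chance; 1.0 means perfect ranking."""
    cases = [p for p, y in zip(predictions, outcomes) if y == 1]
    controls = [p for p, y in zip(predictions, outcomes) if y == 0]
    pairs = list(product(cases, controls))
    score = sum(1.0 if c > n else 0.5 if c == n else 0.0
                for c, n in pairs)
    return score / len(pairs)

# Made-up validation data: predicted 10-year risks and observed events.
preds = [0.05, 0.12, 0.30, 0.08, 0.22, 0.40]
events = [0, 0, 1, 0, 1, 1]
print(c_statistic(preds, events))  # 1.0 here: every case outranks every non-case
```

This is the kind of number a validation paper reports, and it is exactly what you cannot check for a trade-secret algorithm.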

Which brings me to my last point. What good is a tool if you don't know how to use it, or whom it works for? A recent post on Wingedpig points this out: confusion as to the tools. But what I wonder is what tools they used to create the tools. Would they publish their algorithm? Should they have to? Should other companies that will foray into medicine have to? As for SNPedia: a great resource, but the results are just like Wikipedia's... not exactly peer-reviewed.

The Sherpa Says:
The votes are in. I am surprised by the results. 23andMe is the big winner. Why? Well, they specifically state that they do not intend their tools to be used for medicine. Yet all the posts I read have authors who mistakenly use it as a medical tool. (See the genealogist's post "when will they learn.") I would have thought the readers would have chosen Navigenics. Navigenics WILL use their tool as a medical device! So I have to ask them: Where are your data on algorithms? Where did you publish and validate them? Which algorithms are you using? I guess we will find out soon enough. 2008 is right around the corner.


1 comment:

cariaso said...

Perhaps I'm not getting it, but you are linking to 23andme.org, which is a squatter.