In my last blog post, I talked about applying machine learning to analyze complex data such as IVF and fertility health data. Here, I’ll discuss where predictive medicine and conventional clinical research part ways: it’s all in the assumptions. Please feel free to send us a comment or question at email@example.com.
To apply machine learning to predicting IVF outcomes, we first had to peel away some long-held assumptions in fertility statistics. Instead, we treated the following as unknowns: 1) that chronological age is the primary predictor of fertility, to which other predictors merely add adjustments; 2) which particular threshold of BMI decreases the chances of success; 3) that a patient’s diagnosis of polycystic ovarian syndrome (PCOS) affects her outcomes; and so on.
In doing so, we opened the way to answering many complex questions. For a 39-year-old patient, does a BMI of 28 carry more or less weight in her IVF outcomes than, for example, her diagnosis of PCOS, her partner’s low sperm count, or the fact that she smokes? If she has already tried one IVF cycle, how much weight do these predictors carry relative to her response to treatment or the embryo quality in that failed cycle?
Not making assumptions a priori is quite novel in clinical medicine, where treatment is fundamentally tied to target biological mechanisms of disease. But in predictive medicine, factors that are strong predictors may not necessarily be mechanistic, and mechanistic factors may not be the strongest predictors. So it becomes key to identify the factors that have true predictive power, and the unique contribution of each predictor. For example, although aging is an important biological force behind diminished ovarian reserve, and chronological age serves as a useful guide for patient populations as a whole, age by itself may be a poor clinical predictor for many individual patients. Further, chronological age, anti-Müllerian hormone (AMH), Day 3 FSH, and antral follicle count all predict IVF outcomes, but they carry both overlapping and non-overlapping predictive information. Therefore, it is important to extract the non-redundant predictive information carried by each factor, so as not to over- or under-estimate a patient’s chances.
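To make the idea of overlapping predictive information concrete, here is a small sketch using entirely synthetic data (the variable names are stand-ins, not real clinical values). Two predictors are built to be highly correlated, as AMH and antral follicle count are in practice, while a third is independent. A ridge regression is fit, and permutation importance measures how much the error grows when each predictor is shuffled.

```python
import numpy as np

# Illustrative sketch only: synthetic data standing in for real clinical
# variables. "amh" and "afc" are constructed to be highly correlated
# (both proxy the same latent ovarian-reserve signal); "bmi" is independent.
rng = np.random.default_rng(0)
n = 2000
reserve = rng.normal(size=n)                  # latent ovarian reserve
amh = reserve + 0.1 * rng.normal(size=n)      # correlated proxy 1
afc = reserve + 0.1 * rng.normal(size=n)      # correlated proxy 2
bmi = rng.normal(size=n)                      # independent predictor
X = np.column_stack([amh, afc, bmi])
y = reserve + bmi + 0.3 * rng.normal(size=n)  # hypothetical outcome score

# Ridge regression (closed form), so the fitted weight splits across the
# correlated pair instead of landing arbitrarily on one of them.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

def mse(pred):
    return ((y - pred) ** 2).mean()

baseline = mse(X @ w)

# Permutation importance: shuffle one column at a time and measure how
# much the error grows. Each correlated predictor looks weaker on its own
# because the fitted weight was split between the two of them.
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(mse(Xp @ w) - baseline)

print([round(v, 3) for v in importance])
```

In this toy setup, the two redundant predictors each show roughly half the importance of the independent one, which is exactly the over-counting trap the paragraph above warns about: adding AMH and antral follicle count does not double the information.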
Now let’s add another level of complexity. Varying levels of BMI may have a different impact on treatment success and IVF outcomes, depending on the rest of the patient’s clinical profile (e.g., age, Day 3 FSH, etc.). Therefore, we design our predictive models to weigh BMI relative to the other factors in the patient’s profile. Now imagine doing that for each factor. As you can guess, we needed powerful tools to carry out the intensive computation required to answer these questions.
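A toy simulation makes the point. The success probabilities below are entirely made up for illustration; they simply assume that elevated BMI barely moves the success rate in a younger profile but matters much more in an older one. Recovering that age-dependent BMI effect from the simulated outcomes shows why a single, profile-independent BMI penalty would mislead.

```python
import numpy as np

# Fabricated probabilities for illustration only:
#   younger/normal-BMI 0.45, younger/high-BMI 0.42,
#   older/normal-BMI   0.30, older/high-BMI   0.20
rng = np.random.default_rng(1)
n = 40000
older = rng.random(n) < 0.5          # True = "older" clinical profile
high_bmi = rng.random(n) < 0.5       # True = elevated BMI

p = np.where(older,
             np.where(high_bmi, 0.20, 0.30),
             np.where(high_bmi, 0.42, 0.45))
success = rng.random(n) < p          # simulated cycle outcomes

def rate(mask):
    return success[mask].mean()

# BMI effect = drop in success rate attributable to elevated BMI,
# computed separately within each age group.
bmi_effect_younger = rate(~older & ~high_bmi) - rate(~older & high_bmi)
bmi_effect_older = rate(older & ~high_bmi) - rate(older & high_bmi)
print(round(bmi_effect_younger, 3), round(bmi_effect_older, 3))
```

The same BMI elevation costs a few points of success probability in one profile and several times that in another, so the model has to learn BMI’s weight jointly with the rest of the profile rather than as a fixed adjustment.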
We applied the boosted tree, a well-established machine learning method. The boosted tree takes the cases from the training set (i.e., the data set that is used to train, or teach, the model) and determines how best to sort them based on the variables (or predictors) that are provided. In my next blog post, I will provide examples of how the boosted tree can be used to analyze IVF data.
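For readers curious about the mechanics, here is a minimal sketch of gradient boosting with decision stumps (one-split trees) on synthetic data. This is not our production model; real boosted-tree implementations use deeper trees, regularization, and careful validation. The core loop, though, is just this: each new tree is fit to the residual errors of the ensemble so far.

```python
import numpy as np

def fit_stump(X, resid):
    """Find the single split (feature, threshold) that best fits resid."""
    best = (np.inf, None)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:   # exclude max so both sides are nonempty
            left = X[:, j] <= t
            lmean, rmean = resid[left].mean(), resid[~left].mean()
            sse = (((resid[left] - lmean) ** 2).sum()
                   + ((resid[~left] - rmean) ** 2).sum())
            if sse < best[0]:
                best = (sse, (j, t, lmean, rmean))
    return best[1]

def stump_predict(stump, X):
    j, t, lmean, rmean = stump
    return np.where(X[:, j] <= t, lmean, rmean)

def boost(X, y, rounds=50, lr=0.1):
    """Gradient boosting for squared loss: each stump fits the residuals."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(rounds):
        stump = fit_stump(X, y - pred)   # residuals = negative gradient
        pred += lr * stump_predict(stump, X)
        stumps.append(stump)
    return stumps, pred

# Synthetic stand-in for a training set: 3 predictors, nonlinear outcome.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))
y = np.sin(X[:, 0]) + (X[:, 1] > 0) * X[:, 2] + 0.1 * rng.normal(size=300)

stumps, pred = boost(X, y)
mse_baseline = ((y - y.mean()) ** 2).mean()
mse_boosted = ((y - pred) ** 2).mean()
print(round(mse_baseline, 3), round(mse_boosted, 3))
```

Each round, the ensemble’s remaining error shrinks as a new stump picks whichever split of whichever predictor explains the most residual variation; this is how the method weighs each factor relative to the rest of the profile rather than in isolation.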