Jack Dunn and Daisy Zhuo, Co-founding Partners at Interpretable AI, believe that to unlock the full potential of artificial intelligence, its decisions must be fully explainable to, and understood by, all relevant stakeholders. This means humans working with machines rather than blindly following them. Perhaps more importantly, it means humans trusting machines and the decisions they make.
The dominant approaches for generating interpretable models like decision trees were developed in the 1980s, when computing power was scarce, and they have not evolved to exploit the enormous gains in computation since. Interpretable AI's technology leverages advances in modern optimization to revisit these problems from a fresh perspective, delivering far better performance than has come to be expected from interpretable models.
“Traditionally, practitioners have had to choose between models that are interpretable and models that have good performance,” says Dunn. “Our core product, Optimal Decision Trees, is able to produce a simple decision path that humans can follow. It mimics the human decision-making process while maintaining the same level of performance as deep learning systems.”
Interpretable AI has recently secured a partnership that will bring its technology to a number of large retailers in the US and abroad. One area of retail already seeing the impact of artificial intelligence is assortment planning (i.e., deciding which products will be offered where and when, while weighing financial objectives). Dunn and Zhuo aim to disrupt that space in a whole new way.
“Our technology will allow retailers to really get into the data and understand what drives sales, how to prepare for new product releases, how to stock them, where to ship them, and really how to harness the power of AI without wondering whether they can trust what it’s telling them.”
Their products are based on years of research under the guidance of their Co-founding Partner, Dimitris Bertsimas, who is also Co-director of the Operations Research Center at MIT, where Dunn and Zhuo received their PhDs. Both cite the Institute’s mission as an important influence on their desire to take cutting-edge research out into the world, rethinking and improving the status quo.
“Being at MIT, this whole culture of working together with industry on real problems helped us to speed up our methodological development to have an impact in the world—it’s why we’re driven to make sure that the work we do in the lab can create value in the world,” says Dunn. “Through our consulting engagements in healthcare and insurance, we saw a real need for interpretability and scalability,” says Zhuo. “That’s why we designed our method from the ground up to fit this particular need.”
In the medical field, their technology has already been embraced by two of the world’s leading institutions. Their risk calculator, which helps clinicians quickly decide whether a patient needs surgery and what type of surgery to perform, is used daily in the emergency room at Massachusetts General Hospital.
It works like this: Let’s say a doctor wants to predict a patient's risk of post-surgery acute renal failure. The doctor answers a handful of questions prompted by Dunn and Zhuo’s technology. For example: Is the patient's creatinine level below 2.5 mg/dL? Is the patient currently on dialysis? Is the patient currently on mechanical ventilation? Based on the responses, the model produces an accurate prediction of the patient's risk of acute renal failure after surgery. In addition, thanks to the transparent nature of Interpretable AI’s Optimal Decision Trees, the doctor can see the medical rationale and data behind the prediction.
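The exact model and clinical thresholds behind the risk calculator are not spelled out here, but a minimal sketch can show how a tree-based calculator of this kind answers such questions. The example below uses scikit-learn's ordinary (greedy) decision tree as a stand-in for Optimal Decision Trees, with synthetic data and hypothetical feature names echoing the questions above.

```python
# Illustrative sketch only: scikit-learn's greedy CART tree stands in for
# Interpretable AI's Optimal Decision Trees; all features and data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1000

# Hypothetical pre-operative features echoing the questions in the article.
creatinine = rng.uniform(0.5, 6.0, n)      # mg/dL
on_dialysis = rng.integers(0, 2, n)        # 0 = no, 1 = yes
on_ventilator = rng.integers(0, 2, n)      # 0 = no, 1 = yes
X = np.column_stack([creatinine, on_dialysis, on_ventilator])

# Toy outcome: acute renal failure after surgery, generated from a made-up risk.
risk = 0.05 + 0.15 * (creatinine > 2.5) + 0.25 * on_dialysis + 0.10 * on_ventilator
y = rng.random(n) < risk

# A shallow tree keeps the decision path short enough for a clinician to read.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, y)

# Predicted risk for a new patient, plus the human-readable rules behind it.
patient = np.array([[3.1, 1, 0]])          # creatinine 3.1 mg/dL, on dialysis
print("Predicted risk:", tree.predict_proba(patient)[0, 1])
print(export_text(tree, feature_names=["creatinine_mg_dL", "on_dialysis", "on_ventilator"]))
```

The printed tree is the whole model: each root-to-leaf path is a short series of yes/no questions ending in a risk estimate, which is what makes the prediction auditable at the bedside.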
The team at Interpretable AI has also worked hand-in-hand with oncologists at Dana-Farber to develop a “cancer mortality predictor.” The technology takes into account the variables, values, and logic required to determine the best treatment path for patients. The transparent nature of its decision-making allows oncologists to verify their intuition while providing patients with a clearer understanding of their options, thereby empowering both stakeholders in a difficult process often fraught with anxiety. Interpretable AI is working to bring this new product to a number of large cancer hospitals throughout the US.
Unlike other approaches to explainable AI, which bolt post-hoc, best-guess explanations onto black-box models after the fact, the models at Interpretable AI are built from the ground up to be interpretable. Zhuo uses the example of a risk-scoring system in banking: a practitioner trains a deep neural network that predicts, say, an 80 percent chance of default, and only then, after seeing the number, attempts to explain it.
“These approaches are local in nature, only considering how small changes to the person’s characteristics affect the prediction,” says Zhuo. “Whereas with our Optimal Decision Trees approach, you can also see globally why a particular person falls into a particular group, and why the decision was made to segment people in that manner.”
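To make the local-versus-global distinction concrete, here is a rough sketch on synthetic credit data; the gradient-boosted "black box," the one-feature probe, and the shallow tree are all stand-ins chosen for illustration, not Interpretable AI's method.

```python
# Sketch of local (post-hoc) vs. global (built-in) explanations on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 2000
income = rng.uniform(20, 200, n)        # hypothetical features: income in $1000s
debt_ratio = rng.uniform(0, 1.5, n)     # and debt-to-income ratio
y = (debt_ratio > 0.8) & (income < 60)  # toy default rule
X = np.column_stack([income, debt_ratio])

# Local, post-hoc probe of a black box: nudge one feature for one applicant
# and watch how the predicted default probability moves.
black_box = GradientBoostingClassifier().fit(X, y)
applicant = np.array([[45.0, 0.9]])
base = black_box.predict_proba(applicant)[0, 1]
nudged = black_box.predict_proba(applicant + [[5.0, 0.0]])[0, 1]
print(f"Local probe: +$5k income moves predicted default {base:.2f} -> {nudged:.2f}")

# Globally interpretable alternative: the printed segmentation is the model itself.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income_k", "debt_ratio"]))
```

The probe only says how this one prediction shifts under a small change; the tree shows every rule used to segment all applicants.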
Today, 15 percent of enterprises are using AI, but 31 percent are expected to incorporate it within the next year. With a combination of fully interpretable, high-performance algorithms that is unique in the market, Dunn and Zhuo are intent on taking their technology to heavily regulated industries that may previously have been hesitant to use artificial intelligence: banking and insurance, for example.
“It’s not enough for a bank to tell an applicant that their black-box method has denied them a loan,” says Dunn. “But our fully explainable methods can output the series of variables and decisions that led to the loan being denied, while still giving the bank the ability to harness the increased predictive power of AI.” This not only helps banks produce better risk estimates but also gives consumers an unprecedented level of transparency into the decision-making process. Dunn and Zhuo believe their current collaborations will establish a more transparent, equitable process, leading to greater customer engagement and, eventually, long-term industry growth.
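As a rough illustration of the kind of output Dunn describes, the sketch below walks a hypothetical applicant through a stand-in scikit-learn decision tree and prints the sequence of variables and thresholds on that applicant's path; the feature names, thresholds, and data are invented for the example.

```python
# Hypothetical loan-decision walkthrough; scikit-learn again stands in for
# Optimal Decision Trees, and every feature and threshold here is invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 2000
income = rng.uniform(20, 200, n)                 # $1000s
debt_ratio = rng.uniform(0, 1.5, n)
missed_payments = rng.integers(0, 6, n)
X = np.column_stack([income, debt_ratio, missed_payments])
y = (debt_ratio > 0.8) | (missed_payments >= 3)  # toy "default" label
features = ["income_k", "debt_ratio", "missed_payments"]

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=100).fit(X, y)

def explain(applicant):
    """List the variable/threshold comparisons along the applicant's path."""
    reasons = []
    for node in tree.decision_path(applicant).indices:
        feat = tree.tree_.feature[node]
        if feat < 0:                             # leaf node: no split to report
            continue
        threshold = tree.tree_.threshold[node]
        op = "<=" if applicant[0, feat] <= threshold else ">"
        reasons.append(f"{features[feat]} {op} {threshold:.2f}")
    return reasons

applicant = np.array([[55.0, 1.1, 4]])           # income $55k, high debt, 4 missed payments
print("Predicted default risk:", tree.predict_proba(applicant)[0, 1])
print("Decision path:", "; ".join(explain(applicant)))
```

Each printed comparison corresponds to one question on the applicant's path, which is the shape of explanation the quote describes: a short, ordered list of the variables and decisions behind the outcome.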