The hybrid approach to machine and deep learning
Enter the black-box conundrum. Deep learning is a technique that applies algorithms to vast amounts of data, iterating until patterns are discovered. The result is a model whose hidden layers connect inputs to outputs according to those patterns. The potential problem is easy to spot: the hidden layers are exactly that, hidden.
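To make the point concrete, here is a minimal sketch of a single hidden layer in plain Python. The weights are illustrative, not learned; the point is that even with every number visible, the weights themselves say little about *why* a given output was produced.

```python
import math

def forward(x, w_hidden, w_out):
    """One forward pass through a single hidden layer.

    Each hidden unit is a weighted sum of the inputs passed through
    tanh; the output is a weighted sum of the hidden units. The
    mapping from input to output is fully determined by the weights,
    yet inspecting them reveals little about the decision.
    """
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)))
              for row in w_hidden]
    return sum(wo * h for wo, h in zip(w_out, hidden))

# Illustrative weights only; in practice they are learned from data.
score = forward([1.0, 0.5], [[0.2, -0.4], [0.7, 0.1]], [0.5, -0.3])
print(score)
```

Scaling this up to millions of weights across dozens of layers is what turns the model into the black box discussed above.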
On the face of it, this does not seem to pose a problem. But consider where deep learning models are used: everything from image analysis to self-driving cars. In the latter application, those hidden layers are tapped to make potentially lethal decisions. Because the layers are hidden from us, we cannot introspect the model or understand its decision-making. The problems are clear. Deep learning researchers will quickly point out that we human beings are not always able to explain our own decision-making either; that is, after all, why we have professions such as psychology and psychoanalysis. But until we can put a deep learning model under hypnosis, little of that will apply.
What if, however, we were able to calibrate the deep learning model? We propose just such an ensemble, in which the calibrating component is a proprietary ontology.
Born out of decades of bioinformatics research (whose fruits are at the heart of Tim Berners-Lee's drive toward a semantic web), ontologies are a methodology for encapsulating first-order knowledge, i.e., things we actually know. This encapsulation of ground truth allows reasoning over the entire logical construct at once. Earlier attempts in this area, rules engines, fired sequentially through a rule set, so even the order of the rules could affect the outcome. Our hypothesis is that the fuzzier outcomes of a deep learning model can be calibrated against known nodes in the fully reasoned ontology.
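A minimal sketch of the idea, with a toy ontology standing in for the proprietary one. The node names, the is-a graph, and the `calibrate` threshold are all hypothetical illustrations, not the client's actual model. Two properties are shown: entailment over the whole graph is independent of visiting order (unlike a sequential rules engine), and fuzzy model scores are accepted only where they land on a known node.

```python
# Toy ontology: node -> set of parent nodes (is-a relations).
# Purely illustrative; a real ontology would use OWL/RDF tooling.
ONTOLOGY = {
    "extraversion": {"personality_trait"},
    "agreeableness": {"personality_trait"},
    "personality_trait": set(),
}

def entailed(node, ontology):
    """All ancestors entailed for a node. The result is the same
    regardless of the order in which facts are visited, unlike a
    rules engine that fires sequentially through its rule set."""
    seen, frontier = set(), {node}
    while frontier:
        current = frontier.pop()
        for parent in ontology.get(current, ()):
            if parent not in seen:
                seen.add(parent)
                frontier.add(parent)
    return seen

def calibrate(fuzzy_scores, ontology, threshold=0.6):
    """Keep only the model's fuzzy outputs that map onto a known
    ontology node with sufficient confidence."""
    return {label: score for label, score in fuzzy_scores.items()
            if label in ontology and score >= threshold}

# A confident score on an unknown label is discarded; a weak score
# on a known label is discarded; a confident known label survives.
scores = {"extraversion": 0.82, "made_up_label": 0.91, "agreeableness": 0.4}
print(calibrate(scores, ONTOLOGY))
```

The design choice this illustrates: the ontology acts as a gate of ground truth over the model's probabilistic output, rather than as another set of sequentially fired rules.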
There are many scenarios in which to test this hypothesis, but we settled on a model of personality derived from social speech (client patent pending).