By Megan Czasonis, Mark Kritzman and David Turkington
We show that relevance-based prediction captures complex relationships, like a neural network, but with the added benefit of transparency.
Many prediction tasks in economics and finance lie beyond the reach of linear regression analysis. Researchers therefore often turn to machine learning techniques, such as neural networks, to address these complex dynamics. A neural network has the potential to extract nearly all the useful information from a dataset; however, it is difficult to implement and notoriously opaque. Alternatively, relevance-based prediction is a model-free and theoretically grounded approach that forms a prediction as a relevance-weighted average of past outcomes. In a sample application to predicting stock market volatility, we show that relevance-based prediction captures complex relationships like a neural network. However, unlike a neural network, it is remarkably transparent, revealing how each observation and variable contributes to a prediction and disclosing the reliability of a prediction in advance.
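To make the idea of a relevance-weighted average concrete, the sketch below illustrates one simple formulation of relevance: each past observation is scored by its Mahalanobis similarity to current conditions plus the informativeness (unusualness) of both the observation and the current conditions, and the prediction is the average past outcome plus a relevance-weighted sum of deviations from that average. This is a minimal illustration under assumed definitions, not the authors' full methodology; the function name and scaling choices are for exposition only.

```python
import numpy as np

def relevance_weighted_prediction(X, y, x_t):
    """Illustrative relevance-weighted prediction of y at current conditions x_t.

    X   : (n, k) array of past observations of the predictive variables
    y   : (n,) array of past outcomes
    x_t : (k,) array of current conditions

    Assumed (illustrative) definitions:
      similarity(i)      = -0.5 * Mahalanobis distance between x_i and x_t
      informativeness(i) =  0.5 * (distance of x_i from the mean
                                   + distance of x_t from the mean)
      relevance(i)       = similarity(i) + informativeness(i)
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    x_bar = X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))  # inverse sample covariance

    def mahal(a, b):
        # squared Mahalanobis distance between vectors a and b
        d = a - b
        return float(d @ inv_cov @ d)

    sim = np.array([-0.5 * mahal(xi, x_t) for xi in X])
    info = np.array([0.5 * (mahal(xi, x_bar) + mahal(x_t, x_bar)) for xi in X])
    relevance = sim + info

    # prediction: average outcome plus relevance-weighted deviations from it
    return y.mean() + (relevance @ (y - y.mean())) / (n - 1)
```

With relevance defined this way and weights applied to the full sample, the algebra collapses to the familiar linear regression prediction; the transparency benefit described above comes from being able to inspect each observation's relevance score directly, and the complex, nonlinear behavior arises when relevance is used to emphasize only the most relevant subset of observations.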