Statistics for Machine Learning

Comparison between regression and machine learning models

Linear regression and machine learning models both try to solve the same problem, but in different ways. Consider the simple example of fitting the best possible plane to data with two explanatory variables: a regression model fits the hyperplane by minimizing the errors between the hyperplane and the actual observations, whereas machine learning converts the same task into an optimization problem in which the squared errors are minimized by iteratively altering the weights.
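
The following sketch illustrates both views on hypothetical synthetic data: the statistical route solves the least-squares problem in closed form, while the machine learning route reaches essentially the same plane by gradient descent on the squared-error objective. The coefficients (3.0, -2.0, intercept 1.0) are made up for the example.

```python
import numpy as np

# Hypothetical two-variable data: y = 3*x1 - 2*x2 + 1 + noise
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.0 + rng.normal(scale=0.1, size=200)

# Statistical approach: closed-form least-squares fit (normal equations)
X_design = np.column_stack([np.ones(len(X)), X])       # add intercept column
beta_ols = np.linalg.lstsq(X_design, y, rcond=None)[0]

# Machine learning approach: minimize squared error by adjusting weights
w = np.zeros(3)                                        # [intercept, w1, w2]
lr = 0.1
for _ in range(1000):
    residuals = X_design @ w - y                       # errors against observations
    grad = 2 * X_design.T @ residuals / len(y)         # gradient of mean squared error
    w -= lr * grad                                     # weight update

print("Closed-form coefficients:     ", np.round(beta_ols, 3))
print("Gradient descent coefficients:", np.round(w, 3))
```

Both approaches converge to nearly identical coefficients; the difference lies in how the solution is reached, not in what is being solved.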

In statistical modeling, samples are drawn from the population and the model is fitted on the sampled data. In machine learning, by contrast, the weights are updated at the end of each iteration, so even a small batch of data, say 30 observations, is enough for an update; in some cases, such as online learning, the model is updated with a single observation at a time.
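
As a minimal sketch of online learning, the snippet below uses scikit-learn's SGDRegressor and its partial_fit method to update the weights one observation at a time; the data-generating coefficients (4.0 and 2.0) are hypothetical.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
model = SGDRegressor(learning_rate="constant", eta0=0.01)

# Online learning: the model is updated as each single observation arrives
for _ in range(500):
    x_i = rng.normal(size=(1, 2))               # one incoming observation
    y_i = 4.0 * x_i[0, 0] + 2.0 * x_i[0, 1]     # hypothetical true relationship
    model.partial_fit(x_i, np.array([y_i]))     # update weights with this one point

print("Learned weights:", np.round(model.coef_, 2))
```

After a few hundred single-observation updates, the weights settle close to the true coefficients, with no need to refit on the full dataset.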

Machine learning models can be effectively parallelized and made to work across multiple machines, with the model weights broadcast to each machine; in big data settings, frameworks such as Spark implement these techniques.
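
As a rough sketch of how this looks in practice, the snippet below fits a distributed linear regression with PySpark's MLlib. It assumes a working local Spark installation; on a real cluster, the same code distributes the computation across machines. The tiny inline dataset is purely illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

# Assumes a local Spark installation (pip install pyspark)
spark = SparkSession.builder.appName("distributed-regression").getOrCreate()

# Hypothetical toy data with two predictors and a label
df = spark.createDataFrame(
    [(1.0, 2.0, 9.0), (2.0, 1.0, 8.0), (3.0, 4.0, 19.0), (4.0, 3.0, 18.0)],
    ["x1", "x2", "label"],
)

# Assemble predictor columns into the single vector column MLlib expects
assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
model = LinearRegression(featuresCol="features", labelCol="label").fit(
    assembler.transform(df)
)

print("Coefficients:", model.coefficients, "Intercept:", model.intercept)
spark.stop()
```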

Statistical models are parametric in nature, which means the model has parameters on which diagnostics are performed to check its validity. Machine learning models, on the other hand, are non-parametric: they make no assumptions about the shape of the underlying curve. These models learn by themselves from the provided data and come up with complex, intricate functions rather than fitting a predefined function.
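
To illustrate the contrast, the sketch below (with hypothetical synthetic data drawn from a sine curve) fits a parametric straight line and a non-parametric decision tree to the same nonlinear relationship using scikit-learn; the tree recovers the curve from the data alone, without any predefined functional form.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 6, size=(200, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)   # nonlinear ground truth

# Parametric: assumes y is a straight-line function of X
linear = LinearRegression().fit(X, y)

# Non-parametric: no predefined curve; the tree partitions the space from data
tree = DecisionTreeRegressor(max_depth=4).fit(X, y)

X_test = np.linspace(0, 6, 5).reshape(-1, 1)
print("true:  ", np.round(np.sin(X_test).ravel(), 2))
print("linear:", np.round(linear.predict(X_test), 2))
print("tree:  ", np.round(tree.predict(X_test), 2))
```

The linear model misses the oscillation entirely because a straight line was assumed up front, while the tree tracks it closely despite knowing nothing about sine functions.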

Multi-collinearity checks need to be performed in statistical modeling, whereas in machine learning, the weights automatically adjust to compensate for the multi-collinearity problem. If we consider tree-based ensemble methods such as bagging, random forest, and boosting, multi-collinearity does not even arise, as the underlying model is a decision tree, which does not suffer from the multi-collinearity problem in the first place.
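
As an illustration of the check required on the statistical side, the sketch below computes variance inflation factors (VIFs), a standard multi-collinearity diagnostic, using statsmodels; the simulated predictors are hypothetical, with x2 deliberately constructed to be nearly collinear with x1.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(7)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)   # nearly duplicates x1: collinear
x3 = rng.normal(size=200)                   # independent predictor
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

# VIF above roughly 5-10 is a common rule of thumb for problematic collinearity
for i, col in enumerate(X.columns):
    print(col, round(variance_inflation_factor(X.values, i), 1))
```

Here x1 and x2 show very large VIFs and would have to be addressed (for example, by dropping one) before fitting a regression model, whereas a random forest could consume all three columns as-is.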

With the evolution of big data and distributed parallel computing, more complex models are producing state-of-the-art results that were impossible with earlier technology.