Tag: estimation in supervised learning
Missing data is one of the most common problems in statistical analysis. When values are unavailable for some observations of the variables in a model, the data are said to be 'missing'.
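Two simple ways of dealing with missing values can be sketched in plain Python: dropping incomplete observations (listwise deletion) and replacing missing values with the observed mean (mean imputation). The `heights` sample below is hypothetical, with `None` marking a missing observation.

```python
from statistics import mean

# Hypothetical sample; None marks a missing observation.
heights = [170.0, None, 165.5, 172.0, None, 168.0]

# Listwise deletion: keep only the complete observations.
complete = [h for h in heights if h is not None]

# Mean imputation: replace each missing value with the observed mean.
fill = mean(complete)
imputed = [h if h is not None else fill for h in heights]
```

Mean imputation keeps the sample size intact but understates the variance of the variable, so it is only a first resort.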
The Markov chain is one of the most important models that generalize independent trials processes: in a Markov chain, the outcome of each trial may depend on the outcome of the previous one. For independent trials processes there are two principal theorems: the 'Law of Large Numbers' and the 'Central Limit Theorem'.
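A minimal sketch of this dependence, using a hypothetical two-state weather chain ("sunny"/"rainy"): unlike an independent trials process, each day's state depends on the previous day's state through a transition matrix, yet a law-of-large-numbers argument still applies to the long-run fraction of time spent in each state.

```python
import random

random.seed(0)

# Transition rule: probability that TOMORROW is "sunny", given today's state.
P = {"sunny": 0.9, "rainy": 0.5}

state, sunny_days, n = "sunny", 0, 100_000
for _ in range(n):
    state = "sunny" if random.random() < P[state] else "rainy"
    sunny_days += state == "sunny"

# For this transition matrix the stationary probability of "sunny" is
# 5/6 ~ 0.833, and the long-run fraction of sunny days converges to it.
fraction_sunny = sunny_days / n
```

The convergence of `fraction_sunny` to the stationary probability is the Markov-chain analogue of the Law of Large Numbers for independent trials.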
Bootstrap and jackknife are superficially similar statistical techniques that both involve resampling the data. They are nonparametric resampling methods that can estimate the standard error and confidence interval of a population parameter: the bootstrap resamples the data with replacement, while the jackknife recomputes the statistic leaving out one observation at a time.
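Both estimators can be sketched with the standard library alone; the `sample` below is hypothetical, and the statistic of interest is the sample mean.

```python
import random
from statistics import mean, stdev

random.seed(1)
sample = [2.1, 3.4, 1.8, 2.9, 3.1, 2.5, 3.8, 2.2]
n = len(sample)

# Bootstrap: resample with replacement many times; the spread of the
# resampled means estimates the standard error of the mean.
boot_means = [mean(random.choices(sample, k=n)) for _ in range(2000)]
se_boot = stdev(boot_means)

# Jackknife: recompute the mean leaving one observation out each time,
# then apply the jackknife standard-error formula.
jack_means = [mean(sample[:i] + sample[i + 1:]) for i in range(n)]
jack_bar = mean(jack_means)
se_jack = (((n - 1) / n) * sum((m - jack_bar) ** 2 for m in jack_means)) ** 0.5
```

For the sample mean, the jackknife standard error coincides exactly with the classical formula `stdev(sample) / sqrt(n)`, which makes it a useful sanity check; for more complicated statistics the two resampling estimates generally differ.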
Thus, to assess a model, a common practice in data science is to iterate over several candidate models and select the most appropriate one. It is equally important to evaluate the same model on different splits of the data, training on one part and testing on the held-out part. This is called the cross-validation method.
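The procedure can be sketched as k-fold cross-validation with only the standard library: the data are split into k folds, and each fold serves once as the test set while the remaining folds "train" the model. The model here is a deliberately simple, hypothetical baseline that predicts the training mean.

```python
from statistics import mean

def k_fold_cv(y, k=4):
    """Mean squared error of the mean-predictor baseline, averaged over k folds."""
    folds = [y[i::k] for i in range(k)]  # simple interleaved split into k folds
    errors = []
    for i in range(k):
        test = folds[i]                                       # held-out fold
        train = [v for j, f in enumerate(folds) if j != i for v in f]
        pred = mean(train)                                    # "fit" on the training folds
        errors.append(mean((v - pred) ** 2 for v in test))    # evaluate on the held-out fold
    return mean(errors)
```

Usage: `k_fold_cv([1.0, 2.0, 3.0, 4.0, 2.5, 3.5, 1.5, 2.0])` returns the cross-validated error of the baseline; a real model would replace the mean-predictor, but the fold bookkeeping stays the same.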
Monte Carlo simulation is an extension of statistical analysis in which the data are simulated rather than observed. The method uses repeated random sampling to generate the simulated data.
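A classic minimal example of the idea: estimate pi by repeatedly sampling random points in the unit square and counting the fraction that fall inside the quarter circle.

```python
import random

random.seed(42)

n = 100_000
# A point (x, y) with x, y ~ Uniform(0, 1) lies inside the quarter
# circle of radius 1 when x**2 + y**2 <= 1.
inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0 for _ in range(n))

# Area ratio: quarter circle / unit square = pi / 4.
pi_estimate = 4 * inside / n
```

The estimate improves at the usual Monte Carlo rate of about 1/sqrt(n), so each extra digit of accuracy costs roughly a hundredfold more samples.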