Based on the assumptions they make about the shape and structure of the function they try to learn, machine learning algorithms can be divided into two categories: parametric and nonparametric.

Parametric model
A learning model that summarizes data with a set of parameters of fixed size (independent of the number of training examples). No matter how much data you throw at a parametric model, it won’t change its mind about how many parameters it needs.

Some examples of parametric machine learning algorithms are:

  • Linear Regression
  • Linear Support Vector Machines
  • Logistic Regression
  • Naive Bayes
  • Perceptron
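
To make the fixed-size property concrete, here is a minimal sketch (assuming NumPy and scikit-learn are available; the synthetic data is made up for illustration). The fitted linear regression ends up with three coefficients and one intercept whether it sees 100 training examples or 10,000:

    # A minimal sketch, assuming NumPy and scikit-learn: a linear
    # regression's parameter count is set by the number of features,
    # not by the number of training examples.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    for n_samples in (100, 10_000):
        X = rng.normal(size=(n_samples, 3))  # 3 features
        y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=n_samples)

        model = LinearRegression().fit(X, y)
        # Always 3 coefficients plus 1 intercept, regardless of n_samples.
        print(n_samples, model.coef_.shape)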

Benefits of Parametric Machine Learning Algorithms:

  • Simpler: These methods are easier to understand, and their results are easier to interpret.
  • Faster: Parametric models are very fast to learn from data.
  • Less training data: They do not require as much training data and can work well even if the fit to the data is not perfect.

Limitations of Parametric Machine Learning Algorithms:

  • Highly Constrained: By choosing a functional form, these methods are restricted to that form.
  • Limited Complexity: The methods are better suited to simpler problems.
  • Poor Fit: In practice, the methods are unlikely to match the underlying mapping function, as the sketch after this list shows.
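
The sketch below (assuming NumPy and scikit-learn; the quadratic target is made up for illustration) forces a straight line onto curved data. No amount of extra data fixes the mismatch, because the chosen functional form is the bottleneck:

    # A minimal sketch, assuming NumPy and scikit-learn: a linear model
    # fitted to a quadratic relationship cannot match the true mapping.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=200)  # quadratic target

    linear = LinearRegression().fit(X, y)
    # R^2 comes out close to zero: the linear form underfits the curve.
    print(linear.score(X, y))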

Nonparametric model
A learning model that does not make strong assumptions about the form of the mapping function, so the amount of structure it can capture grows with the number of training examples. Nonparametric methods are good when you have a lot of data and no prior knowledge, and when you don’t want to worry too much about choosing just the right features.

Some examples of nonparametric models are:

  • Decision Trees
  • K-Nearest Neighbor
  • Support Vector Machines with Gaussian Kernels
  • Artificial Neural Networks
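
As a rough sketch of how such a model works, here is a k-nearest-neighbor predictor in plain NumPy (the helper name knn_predict is made up for illustration). The "model" is nothing more than the stored training set, so it grows with the data instead of summarizing it into a fixed set of parameters:

    # A minimal sketch in plain NumPy: a k-nearest-neighbor "model"
    # is just the retained training data, so it grows with the dataset.
    import numpy as np

    def knn_predict(X_train, y_train, x, k=5):
        # Distance from the query point to every stored training example.
        dists = np.linalg.norm(X_train - x, axis=1)
        # Majority vote among the k closest labels.
        nearest = np.argsort(dists)[:k]
        return np.bincount(y_train[nearest]).argmax()

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 2))
    y_train = (X_train[:, 0] * X_train[:, 1] > 0).astype(int)  # nonlinear rule

    print(knn_predict(X_train, y_train, np.array([1.0, 1.0])))  # expect 1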

Benefits of Nonparametric Machine Learning Algorithms:

  • Flexibility: Capable of fitting a large number of functional forms.
  • Power: They make no assumptions (or only weak assumptions) about the underlying function.
  • Performance: They can produce higher-performing models for prediction.

Limitations of Nonparametric Machine Learning Algorithms:

  • More data: Require a lot more training data to estimate the mapping function.
  • Slower: Much slower to train, as they often have far more parameters to estimate.
  • Overfitting: There is a greater risk of overfitting the training data, and it is harder to explain why specific predictions are made; the sketch after this list illustrates the risk.
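
To see the overfitting risk in action, the sketch below (assuming NumPy and scikit-learn; the noisy synthetic labels are made up for illustration) fits a 1-nearest-neighbor classifier. It memorizes the training set perfectly but generalizes noticeably worse on held-out data:

    # A minimal sketch, assuming NumPy and scikit-learn: a 1-nearest-neighbor
    # classifier memorizes noisy training labels and pays for it at test time.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] > 0).astype(int)
    y ^= rng.random(500) < 0.2  # flip about 20% of labels: label noise

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    knn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
    print("train accuracy:", knn.score(X_tr, y_tr))  # 1.0: memorized
    print("test accuracy:", knn.score(X_te, y_te))   # noticeably lower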
