{{Short description|Categorization of data using statistics}}
{{for|the clustering approach|Cluster analysis}}
In [[statistics]], '''classification''' is the problem of identifying which of a set of [[categorical data|categories]] (sub-populations) an [[observation]] (or observations) belongs to. Examples are assigning a given email to the [[Spam filtering|"spam" or "non-spam"]] class, and assigning a diagnosis to a given patient based on observed characteristics of the patient (sex, blood pressure, presence or absence of certain symptoms, etc.).

When [[classification]] is performed by a computer, statistical methods are normally used to develop the algorithm.


Often, the individual observations are analyzed into a set of quantifiable properties, known variously as [[explanatory variables]] or ''features''. These properties may variously be [[categorical data|categorical]] (e.g. "A", "B", "AB" or "O", for [[blood type]]), [[ordinal data|ordinal]] (e.g. "large", "medium" or "small"), [[integer|integer-valued]] (e.g. the number of occurrences of a particular word in an [[email]]) or [[real number|real-valued]] (e.g. a measurement of [[blood pressure]]). Other classifiers work by comparing observations to previous observations by means of a [[similarity function|similarity]] or [[metric (mathematics)|distance]] function.

An algorithm that implements classification, especially in a concrete implementation, is known as a '''classifier'''. The term "classifier" sometimes also refers to the mathematical function, implemented by a classification algorithm, that maps input data to a category.

Terminology across fields is quite varied. In statistics, where classification is often done with [[logistic regression]] or a similar procedure, the properties of observations are termed explanatory variables (or independent variables, regressors, etc.), and the categories to be predicted are known as outcomes, which are considered to be possible values of the [[dependent variable]]. In [[machine learning]], the observations are often known as ''instances'', the explanatory variables are termed ''features'' (grouped into a feature vector), and the possible categories to be predicted are ''classes''. Other fields may use different terminology: e.g. in [[community ecology]], the term "classification" normally refers to [[cluster analysis]].

==Relation to other problems==
Classification and clustering are examples of the more general problem of [[pattern recognition]], which is the assignment of some sort of output value to a given input value. Other examples are [[regression analysis|regression]], which assigns a real-valued output to each input; [[sequence labeling]], which assigns a class to each member of a sequence of values (for example, [[part-of-speech tagging]], which assigns a part of speech to each word in an input sentence); [[parsing]], which assigns a [[parse tree]] to an input sentence, describing the syntactic structure of the sentence; etc.

A common subclass of classification is [[probabilistic classification]]. Algorithms of this nature use [[statistical inference]] to find the best class for a given instance. Unlike other algorithms, which simply output a "best" class, probabilistic algorithms output a probability of the instance being a member of each of the possible classes. The best class is normally then selected as the one with the highest probability. Such an algorithm has several advantages over non-probabilistic classifiers:
* It can output a confidence value associated with its choice (in general, a classifier that can do this is known as a confidence-weighted classifier).
* Correspondingly, it can abstain when its confidence in choosing any particular output is too low.
* Because of the probabilities which are generated, probabilistic classifiers can be more effectively incorporated into larger machine-learning tasks, in a way that partially or completely avoids the problem of error propagation.
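For example, a minimal sketch (hypothetical probabilities and an arbitrary confidence threshold, not drawn from any particular source) of using a probabilistic classifier's output either to pick the most probable class or to abstain:

<syntaxhighlight lang="python">
def classify_with_abstention(class_probs, threshold=0.7):
    """Return the most probable class, or None (abstain) when the
    classifier's confidence in every class is below the threshold."""
    best_class = max(class_probs, key=class_probs.get)
    if class_probs[best_class] < threshold:
        return None  # abstain: not confident enough in any single class
    return best_class

# Hypothetical per-class probabilities as a probabilistic classifier might output them.
print(classify_with_abstention({"spam": 0.95, "non-spam": 0.05}))  # 'spam'
print(classify_with_abstention({"spam": 0.55, "non-spam": 0.45}))  # None (abstains)
</syntaxhighlight>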
==Frequentist procedures==


Early work on statistical classification was undertaken by [[Ronald Fisher|Fisher]],<ref>{{Cite journal |doi = 10.1111/j.1469-1809.1936.tb02137.x|title = The Use of Multiple Measurements in Taxonomic Problems|year = 1936|last1 = Fisher|first1 = R. A.|journal = [[Annals of Eugenics]]|volume = 7|issue = 2|pages = 179–188|hdl = 2440/15227|hdl-access = free}}</ref><ref>{{Cite journal |doi = 10.1111/j.1469-1809.1938.tb02189.x|title = The Statistical Utilization of Multiple Measurements|year = 1938|last1 = Fisher|first1 = R. A.|journal = [[Annals of Eugenics]]|volume = 8|issue = 4|pages = 376–386|hdl = 2440/15232|hdl-access = free}}</ref> in the context of two-group problems, leading to [[Fisher's linear discriminant]] function as the rule for assigning a group to a new observation.<ref name=G1977>Gnanadesikan, R. (1977) ''Methods for Statistical Data Analysis of Multivariate Observations'', Wiley. {{ISBN|0-471-30845-5}} (p. 83&ndash;86)</ref> This early work assumed that data-values within each of the two groups had a [[multivariate normal distribution]]. The extension of this same context to more than two groups has also been considered with a restriction imposed that the classification rule should be [[linear]].<ref name=G1977/><ref>[[C. R. Rao|Rao, C.R.]] (1952) ''Advanced Statistical Methods in Multivariate Analysis'', Wiley. (Section 9c)</ref> Later work for the multivariate normal distribution allowed the classifier to be [[nonlinear]]:<ref>[[T. W. Anderson|Anderson, T.W.]] (1958) ''An Introduction to Multivariate Statistical Analysis'', Wiley.</ref> several classification rules can be derived based on different adjustments of the [[Mahalanobis distance]], with a new observation being assigned to the group whose centre has the lowest adjusted distance from the observation.
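For illustration only (this is not taken from the works cited above), the following sketch assigns a new observation to the group whose centre has the smallest [[Mahalanobis distance]]; it assumes NumPy is available and, for simplicity, uses a single pooled covariance matrix with no further adjustment:

<syntaxhighlight lang="python">
import numpy as np

def mahalanobis_rule(x, group_means, pooled_cov):
    """Assign observation x to the group whose mean has the smallest
    Mahalanobis distance, using a shared (pooled) covariance matrix."""
    inv_cov = np.linalg.inv(pooled_cov)
    def sq_dist(mean):
        d = x - mean
        return float(d @ inv_cov @ d)  # squared Mahalanobis distance
    return min(group_means, key=lambda g: sq_dist(group_means[g]))

group_means = {"group 1": np.array([0.0, 0.0]), "group 2": np.array([3.0, 3.0])}
pooled_cov = np.array([[1.0, 0.2], [0.2, 1.0]])
print(mahalanobis_rule(np.array([2.4, 2.1]), group_means, pooled_cov))  # 'group 2'
</syntaxhighlight>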


==Bayesian procedures==
Unlike frequentist procedures, Bayesian classification procedures provide a natural way of taking into account any available information about the relative sizes of the different groups within the overall population.<ref>{{Cite journal |doi = 10.1093/biomet/65.1.31|title = Bayesian cluster analysis|year = 1978|last1 = Binder|first1 = D. A.|journal = [[Biometrika]]|volume = 65|pages = 31–38}}</ref> Bayesian procedures tend to be computationally expensive and, in the days before [[Markov chain Monte Carlo]] computations were developed, approximations for Bayesian clustering rules were devised.<ref>{{Cite journal | doi=10.1093/biomet/68.1.275| title=Approximations to Bayesian clustering rules| year=1981| last1=Binder| first1=David A.| journal=[[Biometrika]]| volume=68| pages=275–285}}</ref>


Some Bayesian procedures involve the calculation of [[group-membership probabilities]]: these provide a more informative outcome than a simple attribution of a single group-label to each new observation.
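As an illustrative sketch only (made-up numbers; one-dimensional Gaussian likelihoods are assumed purely for simplicity), group-membership probabilities can be obtained from Bayes' rule by combining the relative group sizes (prior probabilities) with per-group likelihoods:

<syntaxhighlight lang="python">
from math import exp, pi, sqrt

def gaussian_pdf(x, mean, sd):
    return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))

def group_membership_probabilities(x, groups):
    """groups maps each label to (prior, mean, sd); returns the posterior
    probabilities P(group | x) given by Bayes' rule."""
    joint = {g: prior * gaussian_pdf(x, mean, sd)
             for g, (prior, mean, sd) in groups.items()}
    total = sum(joint.values())
    return {g: p / total for g, p in joint.items()}

# Hypothetical population: 90% of individuals belong to group A, 10% to group B.
groups = {"A": (0.9, 0.0, 1.0), "B": (0.1, 3.0, 1.0)}
# Although x = 2.0 is closer to B's mean, the much larger prior keeps A more probable.
print(group_membership_probabilities(2.0, groups))  # roughly {'A': 0.67, 'B': 0.33}
</syntaxhighlight>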


==Binary and multiclass classification==
Classification can be thought of as two separate problems: [[binary classification]] and [[multiclass classification]]. In binary classification, a better-understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes.<ref>Har-Peled, S., Roth, D., Zimak, D. (2003) "Constraint Classification for Multiclass Classification and Ranking." In: Becker, B., Thrun, S., Obermayer, K. (Eds) ''Advances in Neural Information Processing Systems 15: Proceedings of the 2002 Conference'', MIT Press. {{ISBN|0-262-02550-7}}</ref> Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers.
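One common way of combining binary classifiers is the one-versus-rest strategy. The sketch below is purely illustrative and is not drawn from the cited reference: the binary learner (a crude centroid-distance scorer, <code>train_binary_scorer</code>) is a hypothetical stand-in for any binary method that returns a real-valued score.

<syntaxhighlight lang="python">
def train_binary_scorer(X, y_binary):
    """Hypothetical binary 'classifier': score an observation by how much
    closer it lies to the positive-class centroid than to the negative one."""
    centroid = lambda pts: [sum(c) / len(pts) for c in zip(*pts)]
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    c_pos = centroid([x for x, y in zip(X, y_binary) if y])
    c_neg = centroid([x for x, y in zip(X, y_binary) if not y])
    return lambda x: dist(x, c_neg) - dist(x, c_pos)

def train_one_vs_rest(X, y):
    # One binary problem per class: this class versus all the others.
    return {k: train_binary_scorer(X, [label == k for label in y])
            for k in sorted(set(y))}

def predict(scorers, x):
    # Assign the class whose binary scorer is most confident.
    return max(scorers, key=lambda k: scorers[k](x))

X = [[1.0, 1.0], [1.2, 0.9], [5.0, 5.1], [4.8, 5.3], [9.0, 1.0], [8.7, 1.2]]
y = ["a", "a", "b", "b", "c", "c"]
print(predict(train_one_vs_rest(X, y), [5.1, 4.9]))  # 'b'
</syntaxhighlight>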


== Feature vectors ==
{{main|Feature vector}}

Most algorithms describe an individual instance whose category is to be predicted using a [[feature vector]] of individual, measurable properties of the instance. Each property is termed a [[feature (pattern recognition)|feature]], also known in statistics as an [[explanatory variable]] (or [[independent variable]], although features may or may not be [[statistically independent]]). Features may variously be [[binary data|binary]] (e.g. "on" or "off"); [[categorical data|categorical]] (e.g. "A", "B", "AB" or "O", for [[blood type]]); [[ordinal data|ordinal]] (e.g. "large", "medium" or "small"); [[integer|integer-valued]] (e.g. the number of occurrences of a particular word in an email); or [[real number|real-valued]] (e.g. a measurement of blood pressure). If the instance is an image, the feature values might correspond to the pixels of an image; if the instance is a piece of text, the feature values might be occurrence frequencies of different words. Some algorithms work only in terms of discrete data and require that real-valued or integer-valued data be ''discretized'' into groups (e.g. less than 5, between 5 and 10, or greater than 10).
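As a small illustration (with made-up values), a mixed-type feature vector for the email example above, together with discretization of an integer-valued feature into the groups just mentioned:

<syntaxhighlight lang="python">
def discretize(count):
    """Map an integer-valued feature into discrete groups
    (less than 5, between 5 and 10, greater than 10)."""
    if count < 5:
        return "less than 5"
    if count <= 10:
        return "between 5 and 10"
    return "greater than 10"

# One instance (an email), described by a feature vector of mixed types:
# a binary feature, an integer-valued feature, and its discretized version.
email = {"sender_known": False, "occurrences_of_free": 7}
feature_vector = (email["sender_known"],
                  email["occurrences_of_free"],
                  discretize(email["occurrences_of_free"]))
print(feature_vector)  # (False, 7, 'between 5 and 10')
</syntaxhighlight>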


==Linear classifiers==
{{main|Linear classifier}}
A large number of [[algorithm]]s for classification can be phrased in terms of a [[linear function]] that assigns a score to each possible category ''k'' by [[linear combination|combining]] the feature vector of an instance with a vector of weights, using a [[dot product]]. The predicted category is the one with the highest score. This type of score function is known as a [[linear predictor function]] and has the following general form:
<math display=block>\operatorname{score}(\mathbf{X}_i, k) = \boldsymbol\beta_k \cdot \mathbf{X}_i,</math>

where '''X'''<sub>''i''</sub> is the feature vector for instance ''i'', '''β'''<sub>''k''</sub> is the vector of weights corresponding to category ''k'', and score('''X'''<sub>''i''</sub>, ''k'') is the score associated with assigning instance ''i'' to category ''k''. In [[discrete choice]] theory, where instances represent people and categories represent choices, the score is considered the [[utility]] associated with person ''i'' choosing category ''k''.
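For illustration, a minimal sketch of this scoring rule with made-up weight vectors (no training procedure is shown): each category's score is the dot product of its weight vector with the feature vector, and the highest-scoring category is predicted.

<syntaxhighlight lang="python">
def dot(beta, x):
    return sum(b * xi for b, xi in zip(beta, x))

def classify(x, weights):
    # weights maps each category k to its weight vector beta_k
    return max(weights, key=lambda k: dot(weights[k], x))

weights = {
    "spam":     [2.0, -1.0, 0.5],
    "non-spam": [-1.0, 1.5, 0.0],
}
x = [1.0, 0.2, 3.0]          # feature vector X_i for one instance
print(classify(x, weights))  # 'spam': score 3.3 versus -0.7 for 'non-spam'
</syntaxhighlight>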


Algorithms with this basic setup are known as [[linear classifier]]s. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted.


Examples of such algorithms include
* {{annotated link|Logistic regression}}
** {{annotated link|Multinomial logistic regression}}
* {{annotated link|Probit regression}}
* The [[perceptron]] algorithm
* {{annotated link|Support vector machine}}
* {{annotated link|Linear discriminant analysis}}


==Algorithms==


Since no single form of classification is appropriate for all data sets, a large toolkit of classification algorithms has been developed. The most commonly used include:<ref>{{Cite news|url=https://builtin.com/data-science/tour-top-10-algorithms-machine-learning-newbies|title=A Tour of The Top 10 Algorithms for Machine Learning Newbies|date=2018-01-20|work=Built In|access-date=2019-06-10}}</ref>
* {{annotated link|Artificial neural networks}}
* {{annotated link|Boosting (meta-algorithm)}}
* {{annotated link|Decision tree learning}}
** {{annotated link|Random forest}}
* {{annotated link|Genetic programming}}
** {{annotated link|Gene expression programming}}
** {{annotated link|Multi expression programming}}
** {{annotated link|Linear genetic programming}}
* {{annotated link|Kernel estimation|text=Variable kernel density estimation#Use for statistical classification}}
** {{annotated link|k-nearest neighbor algorithm|k-nearest neighbor}}
* {{annotated link|Learning vector quantization}}
* {{annotated link|Linear classifier}}
** {{annotated link|Fisher's linear discriminant}}
** {{annotated link|Logistic regression}}
** {{annotated link|Naive Bayes classifier}}
** {{annotated link|Perceptron}}
* {{annotated link|Quadratic classifier}}
* {{annotated link|Support vector machine}}
** {{annotated link|Least squares support vector machine}}


Choices between different possible algorithms are frequently made on the basis of quantitative [[Classification#Evaluation of accuracy|evaluation of accuracy]].
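Such a comparison is often based on cross-validated accuracy. The following sketch assumes the [[scikit-learn]] library is available; the particular algorithms and dataset are arbitrary choices for illustration, not a recommendation.

<syntaxhighlight lang="python">
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
candidates = [("logistic regression", LogisticRegression(max_iter=1000)),
              ("3-nearest neighbours", KNeighborsClassifier(n_neighbors=3))]
for name, clf in candidates:
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")
</syntaxhighlight>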

== Evaluation ==
Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems (a phenomenon that may be explained by the [[No free lunch in search and optimization|no-free-lunch theorem]]). Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine it. Determining a suitable classifier for a given problem is, however, still more an art than a science.

The measures [[precision and recall]] are popular metrics used to evaluate the quality of a classification system. More recently, [[receiver operating characteristic]] (ROC) curves have been used to evaluate the tradeoff between true- and false-positive rates of classification algorithms.

As a performance metric, the [[uncertainty coefficient]] has the advantage over simple [[accuracy]] in that it is not affected by the relative sizes of the different classes.
<ref name="Mills2010">
{{Cite journal
| author = Peter Mills
| title = Efficient statistical classification of satellite measurements
| journal = [[International Journal of Remote Sensing]]
| volume = 32
| issue = 21
| pages = 6109–6132
| doi= 10.1080/01431161.2010.507795
| year = 2011
| arxiv = 1202.2194
| bibcode = 2011IJRS...32.6109M
| s2cid = 88518570
}}</ref>
Further, it will not penalize an algorithm for simply ''rearranging'' the classes.
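As a small worked example (with made-up labels), precision and recall for the "positive" class of a binary problem can be computed directly from the counts of true positives, false positives and false negatives:

<syntaxhighlight lang="python">
from collections import Counter

y_true = ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "neg"]
y_pred = ["pos", "neg", "neg", "pos", "pos", "neg", "pos", "neg"]

counts = Counter(zip(y_pred, y_true))
tp = counts[("pos", "pos")]   # predicted positive, actually positive
fp = counts[("pos", "neg")]   # predicted positive, actually negative
fn = counts[("neg", "pos")]   # predicted negative, actually positive

precision = tp / (tp + fp)    # fraction of predicted positives that are correct
recall = tp / (tp + fn)       # fraction of actual positives that are found
print(f"precision={precision:.2f} recall={recall:.2f}")  # both 0.75 here
</syntaxhighlight>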


==Application domains==
{{see also|Cluster analysis#Applications}}
Classification has many applications. In some of these, it is employed as a [[data mining]] procedure, while in others more detailed statistical modeling is undertaken.


* {{annotated link|Biological classification}}
* {{annotated link|Biometric}} identification
* {{annotated link|Computer vision}}
** Medical image analysis and {{annotated link|medical imaging}}
** {{annotated link|Optical character recognition}}
** {{annotated link|Video tracking}}
* {{annotated link|Credit scoring}}
* {{annotated link|Document classification}}
* [[Drug discovery]] and {{annotated link|Drug development|development}}
** {{annotated link|Toxicogenomics}}
** {{annotated link|Quantitative structure-activity relationship}}
* {{annotated link|Geostatistics}}
* {{annotated link|Handwriting recognition}}
* Internet {{annotated link|search engines}}
* [[Micro-array classification]]
* {{annotated link|Pattern recognition}}
* {{annotated link|Recommender system}}
* {{annotated link|Speech recognition}}
* {{annotated link|Statistical natural language processing}}


{{More footnotes needed|date=January 2010}}


==See also==
{{Commons category}}
{{Portal|Mathematics}}
{{colbegin}}
* {{annotated link|Artificial intelligence}}
* {{annotated link|Binary classification}}
* {{annotated link|Multiclass classification}}
* {{annotated link|Class membership probabilities}}
* {{annotated link|Classification rule}}
* {{annotated link|Compound term processing}}
* {{annotated link|Confusion matrix}}
* {{annotated link|Data mining}}
* {{annotated link|Data warehouse}}
* {{annotated link|Fuzzy logic}}
* {{annotated link|Information retrieval}}
* {{annotated link|List of datasets for machine learning research}}
* {{annotated link|Machine learning}}
* {{annotated link|Recommender system}}
{{colend}}


==References==
{{Reflist}}


{{Statistics|analysis||state=expanded}}
{{Authority control}}


{{DEFAULTSORT:Statistical Classification}}
[[Category:Statistical classification| ]]
[[Category:Machine learning]]
[[Category:Classification algorithms|*]]
