Published: 2017-07-25, last updated: over a year ago

An introduction to machine learning with scikit-learn

Section contents

In this section, we introduce the machine learning vocabulary that we use throughout scikit-learn and give a simple learning example.



Machine learning: the problem setting

In general, a learning problem considers a set of n samples of data and then tries to predict properties of unknown data. If each sample is more than a single number and, for instance, a multi-dimensional entry (aka multivariate data), it is said to have several attributes or features.
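As a concrete illustration (a quick sketch, not part of the original tutorial), three samples each described by four measurements form a (3, 4) array:

>>> import numpy as np
>>> data = np.array([[5.1, 3.5, 1.4, 0.2],   # each row is one sample
...                  [4.9, 3.0, 1.4, 0.2],   # each column is one feature
...                  [6.2, 3.4, 5.4, 2.3]])
>>> data.shape   # (n_samples, n_features)
(3, 4)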


We can separate learning problems in a few large categories:

  • supervised learning, in which the data comes with additional attributes that we want to predict (Click here to go to the scikit-learn supervised learning page). This problem can be either:


  • classification: samples belong to two or more classes and we want to learn from already labeled data how to predict the class of unlabeled data. An example of a classification problem would be handwritten digit recognition, in which the aim is to assign each input vector to one of a finite number of discrete categories. Another way to think of classification is as a discrete (as opposed to continuous) form of supervised learning where one has a limited number of categories and, for each of the n samples provided, one tries to label them with the correct category or class.


  • regression: if the desired output consists of one or more continuous variables, then the task is called regression. An example of a regression problem would be the prediction of the length of a salmon as a function of its age and weight.


  • unsupervised learning, in which the training data consists of a set of input vectors x without any corresponding target values. The goal in such problems may be to discover groups of similar examples within the data, where it is called clustering, or to determine the distribution of data within the input space, known as density estimation, or to project the data from a high-dimensional space down to two or three dimensions for the purpose of visualization (Click here to go to the Scikit-Learn unsupervised learning page). A short sketch contrasting the supervised and unsupervised settings follows this list.

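As promised, here is a minimal sketch contrasting the two settings on the same data (not part of the original tutorial; KMeans cluster ids are arbitrary, so no output is shown for them):

>>> from sklearn import datasets
>>> from sklearn.svm import SVC
>>> from sklearn.cluster import KMeans
>>> iris = datasets.load_iris()
>>> SVC().fit(iris.data, iris.target).predict(iris.data[:3])   # supervised: labels guide learning
array([0, 0, 0])
>>> km = KMeans(n_clusters=3, random_state=0).fit(iris.data)   # unsupervised: no labels used
>>> km.labels_[:3]   # discovered cluster assignments (ids are arbitrary)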

Training set and testing set


Machine learning is about learning some properties of a data set and applying them to new data. This is why a common practice in machine learning to evaluate an algorithm is to split the data at hand into two sets: one that we call the training set, on which we learn data properties, and one that we call the testing set, on which we test these properties.
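For instance, a hedged sketch of such a split using scikit-learn's train_test_split helper (one common approach; the 25% test fraction below is just an illustrative choice):

>>> from sklearn import datasets
>>> from sklearn.model_selection import train_test_split
>>> iris = datasets.load_iris()
>>> X_train, X_test, y_train, y_test = train_test_split(
...     iris.data, iris.target, test_size=0.25, random_state=0)
>>> X_train.shape, X_test.shape
((112, 4), (38, 4))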


Loading an example dataset


scikit-learn comes with a few standard datasets, for instance the iris and digits datasets for classification and the boston house prices dataset for regression.


In the following, we start a Python interpreter from our shell and then load the iris and digits datasets. Our notational convention is that $ denotes the shell prompt while >>> denotes the Python interpreter prompt:


$ python
>>> from sklearn import datasets
>>> iris = datasets.load_iris()
>>> digits = datasets.load_digits()

A dataset is a dictionary-like object that holds all the data and some metadata about the data. This data is stored in the .data member, which is a (n_samples, n_features) array. In the case of supervised problems, one or more response variables are stored in the .target member. More details on the different datasets can be found in the dedicated section.
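For example, a quick sketch of these members on the iris dataset loaded above:

>>> iris.data.shape      # (n_samples, n_features)
(150, 4)
>>> iris.target.shape    # one response value per sample
(150,)
>>> iris.feature_names   # metadata describing the feature columns
['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']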


For instance, in the case of the digits dataset, digits.data gives access to the features that can be used to classify the digits samples:



>>> print(digits.data)
[[  0.   0.   5. ...,   0.   0.   0.]
 [  0.   0.   0. ...,  10.   0.   0.]
 [  0.   0.   0. ...,  16.   9.   0.]
 ...,
 [  0.   0.   1. ...,   6.   0.   0.]
 [  0.   0.   2. ...,  12.   0.   0.]
 [  0.   0.  10. ...,  12.   1.   0.]]

and digits.target gives the ground truth for the digit dataset, that is the number corresponding to each digit image that we are trying to learn:



>>> digits.target
array([0, 1, 2, ..., 8, 9, 8])

Shape of the data arrays


The data is always a 2D array, shape (n_samples, n_features), although the original data may have had a different shape. In the case of the digits, each original sample is an image of shape (8, 8) and can be accessed using:



>>> digits.images[0]
array([[  0.,   0.,   5.,  13.,   9.,   1.,   0.,   0.],
       [  0.,   0.,  13.,  15.,  10.,  15.,   5.,   0.],
       [  0.,   3.,  15.,   2.,   0.,  11.,   8.,   0.],
       [  0.,   4.,  12.,   0.,   0.,   8.,   8.,   0.],
       [  0.,   5.,   8.,   0.,   0.,   9.,   8.,   0.],
       [  0.,   4.,  11.,   0.,   1.,  12.,   7.,   0.],
       [  0.,   2.,  14.,   5.,  10.,  12.,   0.,   0.],
       [  0.,   0.,   6.,  13.,  10.,   0.,   0.,   0.]])

The simple example on this dataset illustrates how starting from the original problem one can shape the data for consumption in scikit-learn.
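A hedged sketch of that reshaping step (this mirrors what load_digits already does to produce .data): flatten each (8, 8) image into a 64-element feature vector:

>>> n_samples = len(digits.images)
>>> data = digits.images.reshape((n_samples, -1))   # one flattened row per image
>>> data.shape
(1797, 64)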


Loading from external datasets


To load from an external dataset, please refer to loading external datasets.
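As a rough sketch (assuming a hypothetical comma-separated file my_data.csv whose last column holds the target), any tool that yields a numeric (n_samples, n_features) array will do, for example numpy:

>>> import numpy as np
>>> raw = np.loadtxt('my_data.csv', delimiter=',')   # hypothetical file
>>> X, y = raw[:, :-1], raw[:, -1]                   # split features from target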



Learning and predicting


In the case of the digits dataset, the task is to predict, given an image, which digit it represents. We are given samples of each of the 10 possible classes (the digits zero through nine) on which we fit an estimator to be able to predict the classes to which unseen samples belong.


In scikit-learn, an estimator for classification is a Python object that implements the methods fit(X, y) and predict(T).
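To make that contract concrete, here is a deliberately toy sketch of an object that satisfies it (not a real scikit-learn class): it memorizes the most frequent class and predicts it for every sample:

>>> import numpy as np
>>> class MajorityClassifier:
...     def fit(self, X, y):
...         # remember the most frequent class seen during training
...         values, counts = np.unique(y, return_counts=True)
...         self.majority_ = values[np.argmax(counts)]
...         return self
...     def predict(self, T):
...         # predict that single class for every sample in T
...         return np.full(len(T), self.majority_)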


An example of an estimator is the class sklearn.svm.SVC that implements support vector classification. The constructor of an estimator takes as arguments the parameters of the model, but for the time being, we will consider the estimator as a black box:



>>> from sklearn import svm
>>> clf = svm.SVC(gamma=0.001, C=100.)

Choosing the parameters of the model


In this example we set the value of gamma manually. It is possible to automatically find good values for the parameters by using tools such as grid search and cross validation.
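For instance, a hedged sketch with GridSearchCV, one such tool (the grid values below are illustrative, not recommendations):

>>> from sklearn.model_selection import GridSearchCV
>>> from sklearn import svm
>>> param_grid = {'gamma': [0.0001, 0.001, 0.01], 'C': [1, 10, 100]}
>>> search = GridSearchCV(svm.SVC(), param_grid, cv=5)   # 5-fold cross validation
>>> search.fit(digits.data, digits.target)               # fits one model per combination and fold
>>> search.best_params_                                  # the best combination found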


We call our estimator instance clf, as it is a classifier. It now must be fitted to the data; that is, it must learn from the data. This is done by passing our training set to the fit method. As a training set, let us use all the images of our dataset apart from the last one. We select this training set with the [:-1] Python syntax, which produces a new array that contains all but the last entry of digits.data:



>>> clf.fit(digits.data[:-1], digits.target[:-1])
SVC(C=100.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape=None, degree=3, gamma=0.001, kernel='rbf',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)

Now you can predict new values. In particular, we can ask the classifier what the digit of our last image in the digits dataset is, the one we have not used to train the classifier:



>>> clf.predict(digits.data[-1:])
array([8])


The corresponding image is the following:



As you can see, it is a challenging task: the images are of poor resolution. Do you agree with the classifier?

A complete example of this classification problem is available as an example that you can run and study: Recognizing hand-written digits.




Model persistence


It is possible to save a model in scikit-learn by using Python's built-in persistence module, namely pickle:



>>> from sklearn import svm
>>> from sklearn import datasets
>>> clf = svm.SVC()
>>> iris = datasets.load_iris()
>>> X, y = iris.data, iris.target
>>> clf.fit(X, y)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape=None, degree=3, gamma='auto', kernel='rbf',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)

>>> import pickle
>>> s = pickle.dumps(clf)
>>> clf2 = pickle.loads(s)
>>> clf2.predict(X[0:1])
array([0])
>>> y[0]
0


In the specific case of scikit-learn, it may be more interesting to use joblib's replacement of pickle (joblib.dump & joblib.load), which is more efficient on big data, but which can only pickle to disk and not to a string:



>>> from sklearn.externals import joblib
>>> joblib.dump(clf, 'filename.pkl')

Later you can load back the pickled model (possibly in another Python process) with:



>>> clf = joblib.load('filename.pkl')

Note

The joblib.dump and joblib.load functions also accept file-like objects instead of filenames. More information on data persistence with Joblib is available here.

Note that pickle has some security and maintainability issues. Please refer to section Model persistence for more detailed information about model persistence with scikit-learn.





Conventions


scikit-learn estimators follow certain rules to make their behavior more predictable.


Type casting


Unless otherwise specified, input will be cast to float64:



>>> import numpy as np
>>> from sklearn import random_projection

>>> rng = np.random.RandomState(0)
>>> X = rng.rand(10, 2000)
>>> X = np.array(X, dtype='float32')
>>> X.dtype
dtype('float32')

>>> transformer = random_projection.GaussianRandomProjection()
>>> X_new = transformer.fit_transform(X)
>>> X_new.dtype
dtype('float64')

In this example, X is float32, which is cast to float64 by fit_transform(X).

Regression targets are cast to float64, classification targets are maintained:




>>> from sklearn import datasets
>>> from sklearn.svm import SVC
>>> iris = datasets.load_iris()
>>> clf = SVC()
>>> clf.fit(iris.data, iris.target)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape=None, degree=3, gamma='auto', kernel='rbf',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)
>>> list(clf.predict(iris.data[:3]))
[0, 0, 0]

>>> clf.fit(iris.data, iris.target_names[iris.target])
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape=None, degree=3, gamma='auto', kernel='rbf',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)
>>> list(clf.predict(iris.data[:3]))
['setosa', 'setosa', 'setosa']

Here, the first predict() returns an integer array, since iris.target (an integer array) was used in fit. The second predict() returns a string array, since iris.target_names was used for fitting.


Refitting and updating parameters


Hyper-parameters of an estimator can be updated after it has been constructed, via the sklearn.pipeline.Pipeline.set_params method. Calling fit() more than once will overwrite what was learned by any previous fit():



>>> import numpy as np
>>> from sklearn.svm import SVC

>>> rng = np.random.RandomState(0)
>>> X = rng.rand(100, 10)
>>> y = rng.binomial(1, 0.5, 100)
>>> X_test = rng.rand(5, 10)

>>> clf = SVC()
>>> clf.set_params(kernel='linear').fit(X, y)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape=None, degree=3, gamma='auto', kernel='linear',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)
>>> clf.predict(X_test)
array([1, 0, 1, 1, 0])

>>> clf.set_params(kernel='rbf').fit(X, y)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape=None, degree=3, gamma='auto', kernel='rbf',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)
>>> clf.predict(X_test)
array([0, 0, 0, 1, 0])

Here, the default kernel rbf is first changed to linear after the estimator has been constructed via SVC(), and changed back to rbf to refit the estimator and to make a second prediction.


Multiclass vs. multilabel fitting


When using multiclass classifiers, the learning and prediction task that is performed is dependent on the format of the target data fit upon:



>>> from sklearn.svm import SVC
>>> from sklearn.multiclass import OneVsRestClassifier
>>> from sklearn.preprocessing import LabelBinarizer

>>> X = [[1, 2], [2, 4], [4, 5], [3, 2], [3, 1]]
>>> y = [0, 0, 1, 1, 2]

>>> classif = OneVsRestClassifier(estimator=SVC(random_state=0))
>>> classif.fit(X, y).predict(X)
array([0, 0, 1, 1, 2])

In the above case, the classifier is fit on a 1d array of multiclass labels and the predict() method therefore provides corresponding multiclass predictions. It is also possible to fit upon a 2d array of binary label indicators:



>>> y = LabelBinarizer().fit_transform(y)
>>> classif.fit(X, y).predict(X)
array([[1, 0, 0],
       [1, 0, 0],
       [0, 1, 0],
       [0, 0, 0],
       [0, 0, 0]])

Here, the classifier is fit() on a 2d binary label representation of y, using the LabelBinarizer. In this case predict() returns a 2d array representing the corresponding multilabel predictions.

Note that the fourth and fifth instances returned all zeroes, indicating that they matched none of the three labels fit upon. With multilabel outputs, it is similarly possible for an instance to be assigned multiple labels:


>>> from sklearn.preprocessing import MultiLabelBinarizer
>>> y = [[0, 1], [0, 2], [1, 3], [0, 2, 3], [2, 4]]
>>> y = MultiLabelBinarizer().fit_transform(y)

>>> classif.fit(X, y).predict(X)
array([[1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0],
       [0, 1, 0, 1, 0],
       [1, 0, 1, 1, 0],
       [0, 0, 1, 0, 1]])
In this case, the classifier is fit upon instances each assigned multiple labels. The MultiLabelBinarizer is used to binarize the 2d array of multilabels to fit upon. As a result, predict() returns a 2d array with multiple predicted labels for each instance.

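A related hedged sketch: MultiLabelBinarizer can also map indicator rows back to label sets via its inverse_transform method, which is convenient for reading such predictions:

>>> from sklearn.preprocessing import MultiLabelBinarizer
>>> mlb = MultiLabelBinarizer()
>>> mlb.fit_transform([[0, 1], [0, 2]])   # classes seen: 0, 1, 2
array([[1, 1, 0],
       [1, 0, 1]])
>>> mlb.inverse_transform(mlb.transform([[0, 2]]))   # back to label tuples
[(0, 2)]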


Source: <http://scikit-learn.org/stable/tutorial/basic/tutorial.html>