Posted at 2019-05-30

Artificial Intelligence

1. Machine Learning

1-1. Model Evaluation and Selection

1-1-1. Evaluation Methods

  • The most common approach is the confusion matrix, from which four metrics are derived:
  • Accuracy
  • Precision
  • Recall
  • F-measure
| Algorithm | Accuracy | Precision | Recall | F-measure |
|:---|---:|---:|---:|---:|
| SVC | 0.974 | 0.97 | 0.97 | 0.97 |
| KNeighbors | 0.947 | 0.95 | 0.95 | 0.95 |
| LogisticRegression | 0.982 | 0.98 | 0.98 | 0.98 |
| RandomForest | 0.956 | 0.96 | 0.96 | 0.96 |
| GradientBoosting | 0.965 | 0.97 | 0.97 | 0.97 |
| MLP | 0.965 | 0.97 | 0.97 | 0.97 |
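The four metrics above can each be computed directly with scikit-learn. A small sketch on hypothetical true/predicted labels (1 = malignant, 0 = benign):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical labels for illustration only
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

# Accuracy: fraction of all predictions that are correct
print(accuracy_score(y_true, y_pred))   # 6/8 = 0.75
# Precision: of the samples predicted positive, how many are truly positive
print(precision_score(y_true, y_pred))  # 3/4 = 0.75
# Recall: of the truly positive samples, how many were found
print(recall_score(y_true, y_pred))     # 3/4 = 0.75
# F-measure: harmonic mean of precision and recall
print(f1_score(y_true, y_pred))         # 0.75
```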

Reference

Python
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier,GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score

#Load the breast cancer (Wisconsin) diagnostic dataset
dataset = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data',header=None)
#Split the data into explanatory variables and the target variable
X = dataset.loc[:, 2:].values
y = dataset.loc[:, 1].values

#Encode the class labels ('B'/'M') as integers (0/1)
le = LabelEncoder()
y = le.fit_transform(y)
#Split into training data and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state=1)

#Use Pipeline to chain feature scaling with each machine learning algorithm
pipe_svc = Pipeline([('scl', StandardScaler()), ('clf', SVC(random_state=1))])
pipe_knn = Pipeline([('scl', StandardScaler()), ('clf', KNeighborsClassifier(n_neighbors=10))])
pipe_logistic = Pipeline([('scl', StandardScaler()), ('clf', LogisticRegression())])
pipe_rf = Pipeline([('scl', StandardScaler()), ('clf', RandomForestClassifier(random_state=1))])
pipe_gb = Pipeline([('scl', StandardScaler()), ('clf', GradientBoostingClassifier(random_state=1))])
pipe_mlp = Pipeline([('scl', StandardScaler()), ('clf', MLPClassifier(hidden_layer_sizes=(5,2), max_iter=500, random_state=1))])
pipe_names = ['SVC', 'KNeighbors', 'LogisticRegression', 'RandomForest', 'GradientBoosting', 'MLP']
pipe_lines = [pipe_svc, pipe_knn, pipe_logistic, pipe_rf, pipe_gb, pipe_mlp]

#Train each algorithm in turn, build its confusion matrix, and report accuracy, precision, recall, and F-measure
for (i, pipe) in enumerate(pipe_lines):
    #Fit the model
    pipe.fit(X_train, y_train)
    #Predict malignant vs. benign on the test data with the trained model
    y_pred = pipe.predict(X_test)

    #Compare the true classes with the predicted classes to build the confusion matrix
    confmat = confusion_matrix(y_true=y_test, y_pred=y_pred)

    fig, ax = plt.subplots(figsize = (5, 5))

    #Visualize the classifier's correct and misclassified counts on the test set with matshow
    ax.matshow(confmat, cmap=plt.cm.Blues, alpha=0.3)
    for j in range(confmat.shape[0]):  # iterate over rows (true classes)
        for k in range(confmat.shape[1]):  # iterate over columns (predicted classes)
            ax.text(x=k, y=j, s=confmat[j, k], va='center', ha='center', fontsize=25)
    #Draw the plot with title and axis labels
    plt.title(pipe_names[i], fontsize=20)
    plt.xlabel('predicted label', fontsize=17)
    plt.ylabel('true label', fontsize=17)
    plt.show()
    #Print each algorithm's accuracy
    print(f"{pipe_names[i]} accuracy : {accuracy_score(y_test, y_pred):.3f}")
    print()
    #Print precision, recall, F-measure, and support counts
    print(classification_report(y_test, y_pred, target_names=["benign", "malignant"]))
    print('_'*40)

1-1-2. Hold-out and Bootstrap Methods

1-1-3. Cross-Validation
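A minimal cross-validation sketch with scikit-learn's `cross_val_score`, using the copy of the same Wisconsin breast cancer data bundled with scikit-learn (the pipeline here mirrors the SVC pipeline above; the choice of 5 folds is an assumption for illustration):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Load features and labels
X, y = load_breast_cancer(return_X_y=True)

# Scaling inside the pipeline means each fold is scaled on its own training split only
pipe = Pipeline([('scl', StandardScaler()), ('clf', SVC(random_state=1))])

# 5-fold cross-validation: one accuracy score per held-out fold
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean(), scores.std())
```

Because scaling happens inside the pipeline, no information from a held-out fold leaks into the scaler fitted on the corresponding training folds.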

1-1-4. Performance Measures

1-1-5. Error Rate and Accuracy

1-1-6. Hypothesis Testing

1-1-7. Bias and Variance

1-2. Training and Test Sets

1-2-1. Training Set

1-2-2. Test Set

1-3. Supervised Learning

1-3-1. Linear Regression

1-3-2. Multiple Linear Regression

1-3-3. Logistic Regression

1-3-4. Overfitting

1-3-5. KNN, SVM

1-3-6. Naive Bayes

1-3-7. Tree Models

1-4. Unsupervised Learning

1-4-1. Introduction to Unsupervised Learning

1-4-2. Overview of Clustering

1-4-3. K-means
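A minimal K-means sketch with scikit-learn, run on hypothetical data made of two well-separated blobs (the data and `k=2` are assumptions for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two hypothetical blobs: one around (0, 0), one around (10, 10)
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 10])

# Fit K-means with k=2; each point is assigned a cluster label 0 or 1
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(km.cluster_centers_)  # one centroid near (0, 0), one near (10, 10)
print(km.labels_[:5])
```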

1-4-4. Hierarchical Clustering

1-4-5. Density-Based Clustering

1-4-6. PCA, SVD

1-4-7. Manifold Learning

1-4-8. EM and GMM

1-4-9. Outlier Detection

1-4. Reinforcement Learning

1-4-1. Taxonomy of Reinforcement Learning

1-4-2. Markov Decision Processes

1-4-3. Dynamic Programming

1-4-4. Temporal-Difference Methods

1-5. Decision Trees

1-5-1. Introduction to Decision Trees

1-5-2. Basic Workflow

1-5-3. Continuous and Missing Values

1-5-4. Multivariate Decision Trees

1-6. Neural Networks

1-6-1. Neurons

1-6-2. Perceptrons and Multi-Layer Networks

1-6-3. RBF Networks

1-6-4. ART Networks

1-6-5. SOM Networks

1-6-6. Elman Networks

2. Deep Learning (https://nndl.github.io/)

2-1. Deep Learning Basics

2-1-1. Linear Models

2-1-2. Introduction to NLP

2-1-3. Logistic Regression

2-2. Artificial Neural Networks

2-2-1. Backpropagation

2-2-2. Gradient Descent

2-3. Training and Using Neural Networks

2-3-1. Gradient Checking, Overfitting, Regularization

2-3-2. Neural Network Packages

2-3-3. Optimization Packages

2-4. Convolutional Networks

2-4-1. Introduction to Convolutional Networks

2-4-2. Convolution Operations

2-4-3. Image Classification

2-4-4. Recursive Neural Networks

2-5. Recurrent Neural Networks

2-5-1. RNNs

2-5-2. LSTMs

2-5-3. Application: Text Sentiment Analysis

2-6. Recursive Neural Networks

2-6-1. Introduction

2-6-2. Language Models

2-7. Mandelbrot Set

2-7-1. Basic Steps

2-7-2. Sessions and Variables

2-7-3. Defining and Running the Computation

2-8. Partial Differential Equations

2-8-1. Basic Setup

2-8-2. Computation Functions

2-8-3. Partial Differential Equations

2-9. Other Deep Network Architectures

2-9-1. Generative Adversarial Networks

2-9-2. Variational Autoencoders

NLP

python
import numpy as np

def distinct_words(corpus):
    # Sorted list of the distinct words across all sentences, plus its length
    corpus_words = sorted(set(np.concatenate(corpus)))
    num_corpus_words = len(corpus_words)
    return corpus_words, num_corpus_words

def compute_co_occurrence_matrix(corpus, window_size=4):
    words, num_words = distinct_words(corpus)
    # Map each word to its row/column index in the co-occurrence matrix
    word2Ind = dict(zip(words, range(num_words)))
    M = np.zeros((num_words, num_words), dtype=float)

    # Iterate over every sentence in the corpus
    for array in corpus:
        # Treat each word of the sentence in turn as the focus word
        for col, focus_word in enumerate(array):
            # Right edge of the context window, clipped to the sentence length
            max_cnt = min(col + window_size + 1, len(array))
            # Left edge of the context window, clipped to 0
            min_cnt = max(col - window_size, 0)
            # Count the words to the right of the focus word
            for right in range(col + 1, max_cnt):
                M[word2Ind[focus_word], word2Ind[array[right]]] += 1
            # Count the words to the left of the focus word
            for left in range(min_cnt, col):
                M[word2Ind[focus_word], word2Ind[array[left]]] += 1
    return M, word2Ind
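As a self-contained illustration of the same co-occurrence counting, here is the idea run on a toy corpus (the sentences and `window_size=1` are hypothetical):

```python
import numpy as np

# Toy corpus: each sentence is a list of tokens
corpus = [["all", "that", "glitters", "is", "not", "gold"],
          ["all", "is", "well", "that", "ends", "well"]]

# Vocabulary: sorted distinct words, mapped to row/column indices
words = sorted(set(w for sent in corpus for w in sent))
word2Ind = {w: i for i, w in enumerate(words)}
M = np.zeros((len(words), len(words)))

window_size = 1
for sent in corpus:
    for col, focus in enumerate(sent):
        # Count every word within window_size positions of the focus word
        for ctx in sent[max(0, col - window_size):col] + sent[col + 1:col + window_size + 1]:
            M[word2Ind[focus], word2Ind[ctx]] += 1

print(M.sum())                               # total co-occurrence counts
print(M[word2Ind["all"], word2Ind["that"]])  # "all" and "that" are adjacent once
```

Since every pair is counted from both sides, the resulting matrix is symmetric.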

