
Python metrics.precision_recall_fscore_support Function Code Examples


This article collects and summarizes typical usage examples of the sklearn.metrics.precision_recall_fscore_support function in Python. If you are trying to work out what precision_recall_fscore_support does, how to call it, or what real code that uses it looks like, the curated examples below should help.



The following presents 20 code examples of precision_recall_fscore_support, ordered by popularity.
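Before the examples, here is a minimal, self-contained sketch of the function's basic contract (the toy labels are invented for illustration): it returns a 4-tuple of precision, recall, F-beta score, and support, either as per-class arrays (average=None) or as aggregated scalars, in which case support is None.

from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]

# Per-class arrays: one precision/recall/F1/support entry per label.
p, r, f, s = precision_recall_fscore_support(y_true, y_pred, average=None)
print(p, r, f, s)  # p=[2/3, 1.0], r=[1.0, 0.5], f=[0.8, 2/3], s=[2, 2]

# Aggregated scalars; support comes back as None when averaging.
p, r, f, s = precision_recall_fscore_support(y_true, y_pred, average='macro')
print(p, r, f, s)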

Example 1: main

def main():

    do_it = 1

    # get data
    global g_train, g_train_label, g_test, g_test_label, g_feature_name
    g_train, g_train_label, g_test, g_test_label, g_feature_name = load_features_and_labels()

    # do
    if do_it == 0:
        forest = cudaTreeRandomForestClassifier(n_estimators=50, verbose=True, bootstrap=False)
        forest.fit(np.asarray(g_train), np.asarray(g_train_label), bfs_threshold=4196)
        predictions = forest.predict(np.asarray(g_test))
        print(precision_recall_fscore_support(g_test_label, predictions, average='micro'))

    # do
    if do_it == 0:
        forest = hybridForestRandomForestClassifier(n_estimators=50,
                                                    n_gpus=2,
                                                    n_jobs=6,
                                                    bootstrap=False,
                                                    cpu_classifier=WiseRF)
        forest.fit(np.asarray(g_train), np.asarray(g_train_label), bfs_threshold=4196)
        predictions = forest.predict(np.asarray(g_test))
        print(precision_recall_fscore_support(g_test_label, predictions, average='micro'))
Developer: SpikingNeurons, Project: WriterIdentification, Source file: gpu_cuda.py


Example 2: evaluate_mutiple

def evaluate_mutiple(ground_truth, prediction, find_max=False, f_beta=1.0, avg_method=None):
    """
    :param ground_truth: 1-d array, e.g. gt: [1, 1, 2, 2, 3]
    :param prediction: 1-d array, e.g. prediction: [1, 1, 2, 2, 4]
    :return: precision, recall, f-value
    """

    prediction_indices = prediction

    if find_max or len(prediction.shape) == 2:
        prediction_indices = find_max_indices(prediction)

    # Find Precision & Recall & F-value
    precision, recall, f_value, support = None, None, None, None

    if len(prediction.shape) == 2:
        M = prediction.shape[1]
        precision, recall, f_value, support \
            = precision_recall_fscore_support(ground_truth,
                                              prediction_indices,
                                              beta=f_beta,
                                              pos_label=M,
                                              average=avg_method)
    else:
        precision, recall, f_value, support \
            = precision_recall_fscore_support(ground_truth,
                                              prediction_indices,
                                              beta=f_beta,
                                              average=avg_method)

    return precision, recall, f_value
Developer: corsy, Project: evaluators, Source file: precision_recall_evaluator.py
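The f_beta argument of evaluate_mutiple is forwarded to beta, which sets how much recall is weighted relative to precision: F_beta = (1 + beta^2) * P * R / (beta^2 * P + R), so beta=1.0 is the ordinary F1. A quick sanity check on made-up binary labels:

import numpy as np
from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 0, 1, 1]
y_pred = [0, 1, 1, 1]
beta = 0.5  # beta < 1 favors precision; beta > 1 favors recall

p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, beta=beta, average='binary')
# f matches the textbook definition of the F-beta score:
print(np.isclose(f, (1 + beta**2) * p * r / (beta**2 * p + r)))  # True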


Example 3: learnCART

    def learnCART(self):
        train_input_data = self.loadData(self.train_file)
        target = [x[1] for x in train_input_data]
        target = target[1:]
        features = [x[2:] for x in train_input_data]
        features = features[1:]
        # feature selection
        #features_new = self.doFeatureSelection(features,target)
        model = self.classify(features,target)

        test_input_data = self.loadData(self.test_file)
        actualOutput = [x[1] for x in test_input_data]
        actualOutput = actualOutput[1:]
        features = [x[2:] for x in test_input_data]
        features = features[1:]

        predictedOutput = model.predict(features)
        #print predictedOutput
        #print actualOutput
        self.computeAccuracy(predictedOutput,actualOutput)
        print "Precision recall Fscore support metrics for CART "
        print precision_recall_fscore_support(actualOutput,predictedOutput)
        print "\nconfusion matrix\n"
        print confusion_matrix(actualOutput,predictedOutput)
        self.printDTRules(model)
        X= []
        Y=[]
        for a in predictedOutput:
            X.append(int(a))
        for a in actualOutput:
            Y.append(int(a))
        self.plotROC(Y,X)
        result = zip(Y,X)
        self.write_To_File(result,"cart-predictions.csv")
Developer: satheeshravir, Project: mlproject, Source file: CartDecisionTreeAlgorithm.py


Example 4: compare_dummy

	def compare_dummy(self):
		""" Compares classifier to dummy classifiers"""
		#print "\nDetailed classification report:\n"
		#print "The model is trained on the full development set.\n"
		#print "The scores are computed on the full evaluation set.\n"

		X_train = self.train_vectors
		y_train = self.train_tweetclasses
		X_test = self.test_vectors
		y_test = self.test_tweetclasses

		dummy = DummyClassifier(strategy='most_frequent',random_state=0)
		dummy.fit(X_train, y_train)
		y_true, y_preddum = y_test, dummy.predict(X_test)
		tuples = precision_recall_fscore_support(y_true, y_preddum)

		dummy1 = DummyClassifier(strategy='stratified',random_state=0)
		dummy1.fit(X_train, y_train)
		y_true, y_preddum1 = y_test, dummy1.predict(X_test)
		tuples1 = precision_recall_fscore_support(y_true, y_preddum1)

		dummy2 = DummyClassifier(strategy='uniform',random_state=0)
		dummy2.fit(X_train, y_train)
		y_true, y_preddum2 = y_test, dummy2.predict(X_test)
		tuples2 = precision_recall_fscore_support(y_true, y_preddum2)

		return (tuples, tuples1,tuples2)
Developer: sagieske, Project: scriptie, Source file: classification.py


Example 5: compute_precision_recall_accuracy_thresholded_v3

def compute_precision_recall_accuracy_thresholded_v3(threshold=0.5, sim_column=4):
        """
        starting from 0 the sims (for supp-v2) are:
            soundex
        	nysiis
        	metaphone
        :param threshold:
        :param sim_column:
        :return:
        """
        tab_strings, tab_values, ground_truth = read_in_sim_data_as_table()
        y_true = list()
        y_pred = list()
        num_pos = 0
        output_pos = 0
        for i in range(len(tab_values)):
            y_true.append(ground_truth[i])
            if ground_truth[i] == 1:
                num_pos += 1
            if tab_values[i][sim_column] >= threshold:
                y_pred.append(1)
                output_pos += 1
            else:
                y_pred.append(0)
        # precision, recall, thresholds = precision_recall_curve(np.array(y_true), np.array(y_pred))
        print('printing precision, recall, fscore, support:',
              precision_recall_fscore_support(np.array(y_true), np.array(y_pred), average='binary'))
        print('accuracy score:', accuracy_score(np.array(y_true), np.array(y_pred)))
        # num_pos counts positives in the ground truth; output_pos counts
        # positives predicted at this threshold
        print('number of positives output:', output_pos)
        print('number of positives in ground truth:', num_pos)
Developer: mayankkejriwal, Project: pycharm-projects-ubuntu, Source file: name-matching-analysis.py


Example 6: compute_precision_recall_accuracy_thresholded_v2

def compute_precision_recall_accuracy_thresholded_v2(threshold=0.5, sim_column=4):
    """
    starting from 0 the sims (for supp-v2) are:
        tri_gram_jaccard_similarity
    	jaro_winkler_similarity
    	levenshtein_similarity
    	needleman_wunsch_similarity
    	metaphone_similarity
    :param threshold:
    :param sim_column:
    :return:
    """
    tab_strings, tab_values, ground_truth = read_in_sim_data_as_table()
    y_true = list()
    y_pred = list()
    num_pos = 0
    output_pos = 0
    for i in range(len(tab_values)):
        y_true.append(ground_truth[i])
        if ground_truth[i] == 1:
            num_pos += 1
        if tab_values[i][sim_column] >= threshold:
            y_pred.append(1)
            output_pos += 1
        else:
            y_pred.append(0)
    # precision, recall, thresholds = precision_recall_curve(np.array(y_true), np.array(y_pred))
    print(precision_recall_fscore_support(np.array(y_true), np.array(y_pred), average='binary'))
    print('accuracy score:', accuracy_score(np.array(y_true), np.array(y_pred)))
    # print precision
    # print recall
    print(num_pos)     # positives in the ground truth
    print(output_pos)  # positives predicted at this threshold
Developer: mayankkejriwal, Project: pycharm-projects-ubuntu, Source file: name-matching-analysis.py
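Both of the two threshold functions above binarize one similarity column by hand and evaluate a single threshold per call; the precision_recall_curve call they leave commented out is the built-in way to sweep all candidate thresholds at once. A minimal sketch with made-up scores:

import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])  # e.g. one similarity column

precision, recall, thresholds = precision_recall_curve(y_true, scores)
print(precision)   # precision at each candidate threshold (last entry is 1.0)
print(recall)      # recall at each candidate threshold (last entry is 0.0)
print(thresholds)  # the distinct score values used as thresholds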


Example 7: compare_dummy_classification

    def compare_dummy_classification(self):
        """ Compares classifier to dummy classifiers. Return results (resultscores_tuple, N.A., N.A.)"""
        X_train = self.train_vectors
        y_train = self.train_tweetclasses
        X_test = self.test_vectors
        y_test = self.test_tweetclasses

        dummy_results = []

        dummy = DummyClassifier(strategy="most_frequent", random_state=0)
        dummy.fit(X_train, y_train)
        y_true, y_preddum = y_test, dummy.predict(X_test)
        tuples = precision_recall_fscore_support(y_true, y_preddum)

        dummy1 = DummyClassifier(strategy="stratified", random_state=0)
        dummy1.fit(X_train, y_train)
        y_true, y_preddum1 = y_test, dummy1.predict(X_test)
        tuples1 = precision_recall_fscore_support(y_true, y_preddum1)

        dummy2 = DummyClassifier(strategy="uniform", random_state=0)
        dummy2.fit(X_train, y_train)
        y_true, y_preddum2 = y_test, dummy2.predict(X_test)
        tuples2 = precision_recall_fscore_support(y_true, y_preddum2)

        resulttuple = ("dummy freq", "N.A.", "N.A.", "N.A.", "N.A.", tuples)
        resulttuple1 = ("dummy strat", "N.A.", "N.A.", "N.A.", "N.A.", tuples1)
        resulttuple2 = ("dummy uni", "N.A.", "N.A.", "N.A.", "N.A.", tuples2)

        dummy_results.append(resulttuple)
        dummy_results.append(resulttuple1)
        dummy_results.append(resulttuple2)

        return dummy_results
Developer: sagieske, Project: scriptie, Source file: classification2.py


Example 8: test_precision_recall_f1_score_with_an_empty_prediction

def test_precision_recall_f1_score_with_an_empty_prediction():
    y_true = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 1, 1, 0]])
    y_pred = np.array([[0, 0, 0, 0], [0, 0, 0, 1], [0, 1, 1, 0]])

    # true_pos = [ 0.  1.  1.  0.]
    # false_pos = [ 0.  0.  0.  1.]
    # false_neg = [ 1.  1.  0.  0.]
    p, r, f, s = precision_recall_fscore_support(y_true, y_pred,
                                                 average=None)
    assert_array_almost_equal(p, [0.0, 1.0, 1.0, 0.0], 2)
    assert_array_almost_equal(r, [0.0, 0.5, 1.0, 0.0], 2)
    assert_array_almost_equal(f, [0.0, 1 / 1.5, 1, 0.0], 2)
    assert_array_almost_equal(s, [1, 2, 1, 0], 2)

    f2 = fbeta_score(y_true, y_pred, beta=2, average=None)
    support = s
    assert_array_almost_equal(f2, [0, 0.55, 1, 0], 2)

    p, r, f, s = precision_recall_fscore_support(y_true, y_pred,
                                                 average="macro")
    assert_almost_equal(p, 0.5)
    assert_almost_equal(r, 1.5 / 4)
    assert_almost_equal(f, 2.5 / (4 * 1.5))
    assert_equal(s, None)
    assert_almost_equal(fbeta_score(y_true, y_pred, beta=2,
                                    average="macro"),
                        np.mean(f2))

    p, r, f, s = precision_recall_fscore_support(y_true, y_pred,
                                                 average="micro")
    assert_almost_equal(p, 2 / 3)
    assert_almost_equal(r, 0.5)
    assert_almost_equal(f, 2 / 3 / (2 / 3 + 0.5))
    assert_equal(s, None)
    assert_almost_equal(fbeta_score(y_true, y_pred, beta=2,
                                    average="micro"),
                        (1 + 4) * p * r / (4 * p + r))

    p, r, f, s = precision_recall_fscore_support(y_true, y_pred,
                                                 average="weighted")
    assert_almost_equal(p, 3 / 4)
    assert_almost_equal(r, 0.5)
    assert_almost_equal(f, (2 / 1.5 + 1) / 4)
    assert_equal(s, None)
    assert_almost_equal(fbeta_score(y_true, y_pred, beta=2,
                                    average="weighted"),
                        np.average(f2, weights=support))

    p, r, f, s = precision_recall_fscore_support(y_true, y_pred,
                                                 average="samples")
    # |h(x_i) inter y_i | = [0, 0, 2]
    # |y_i| = [1, 1, 2]
    # |h(x_i)| = [0, 1, 2]
    assert_almost_equal(p, 1 / 3)
    assert_almost_equal(r, 1 / 3)
    assert_almost_equal(f, 1 / 3)
    assert_equal(s, None)
    assert_almost_equal(fbeta_score(y_true, y_pred, beta=2,
                                    average="samples"),
                        0.333, 2)
Developer: chrisburr, Project: scikit-learn, Source file: test_classification.py
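The assertions above pin down what each averaging mode computes. In short: average=None returns per-class scores, 'macro' takes their unweighted mean, 'micro' pools true/false positives across classes before dividing, 'weighted' weights the per-class scores by support, and 'samples' averages per-instance scores in the multilabel case. A quick check against the same arrays:

import numpy as np
from sklearn.metrics import precision_recall_fscore_support

y_true = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 1, 1, 0]])
y_pred = np.array([[0, 0, 0, 0], [0, 0, 0, 1], [0, 1, 1, 0]])

p, r, f, s = precision_recall_fscore_support(y_true, y_pred, average=None)
# 'macro' is the unweighted mean of the per-class values:
print(np.mean(p), np.mean(r))  # 0.5 0.375 -- matches average="macro" above
# 'micro' pools counts: 2 of the 3 predicted labels are correct, and 2 of
# the 4 true labels are recovered:
print(2 / 3, 2 / 4)            # matches average="micro" above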


Example 9: _update_metrics

    def _update_metrics(self, y_true, y_pred,
                        onco_prob, tsg_prob):
        # record which genes were predicted what
        self.driver_gene_pred = pd.Series(y_pred, self.y.index)
        self.driver_gene_score = pd.Series(onco_prob+tsg_prob, self.y.index)

        # evaluate performance
        prec, recall, fscore, support = metrics.precision_recall_fscore_support(y_true, y_pred,
                                                                                average='macro')
        cancer_gene_pred = ((onco_prob + tsg_prob)>.5).astype(int)
        self.cancer_gene_count[self.num_pred] = np.sum(cancer_gene_pred)
        self.precision[self.num_pred] = prec
        self.recall[self.num_pred] = recall
        self.f1_score[self.num_pred] = fscore

        # compute Precision-Recall curve metrics
        driver_prob = onco_prob + tsg_prob
        driver_true = (y_true > 0).astype(int)
        p, r, thresh = metrics.precision_recall_curve(driver_true, driver_prob)
        p, r, thresh = p[::-1], r[::-1], thresh[::-1]  # reverse order of results
        thresh = np.insert(thresh, 0, 1.0)
        self.driver_precision_array[self.num_pred, :] = interp(self.driver_recall_array, r, p)
        self.driver_threshold_array[self.num_pred, :] = interp(self.driver_recall_array, r, thresh)

        # calculate prediction summary statistics
        prec, recall, fscore, support = metrics.precision_recall_fscore_support(driver_true, cancer_gene_pred)
        self.driver_precision[self.num_pred] = prec[1]
        self.driver_recall[self.num_pred] = recall[1]

        # save driver metrics
        fpr, tpr, thresholds = metrics.roc_curve(driver_true, driver_prob)
        self.driver_tpr_array[self.num_pred, :] = interp(self.driver_fpr_array, fpr, tpr)
Developer: KarchinLab, Project: 2020plus, Source file: generic_classifier.py


Example 10: test

    def test(self, a_trees, a_segments):
        """Estimate performance of segmenter model.

        Args:
          a_trees (list): BitPar trees
          a_segments (list): corresponding gold segments for trees

        Returns:
          2-tuple: macro and micro-averaged F-scores

        """
        if self.model is None:
            return (0, 0)
        segments = [self.model.predict(self.featgen(itree))[0]
                    for itree in a_trees]
        a_segments = [str(s) for s in a_segments]
        _, _, macro_f1, _ = precision_recall_fscore_support(a_segments,
                                                            segments,
                                                            average='macro',
                                                            warn_for=())
        _, _, micro_f1, _ = precision_recall_fscore_support(a_segments,
                                                            segments,
                                                            average='micro',
                                                            warn_for=())
        return (macro_f1, micro_f1)
Developer: WladimirSidorenko, Project: DiscourseSegmenter, Source file: bparsegmenter.py


Example 11: main

def main():
    model_file = '../../paper/data/srwe_model/wiki_small.w2v.model'
    nytimes_file = '../gen_data/nytimes/news_corpus'
    model = load_w2v_model(model_file, logging, nparray=True)
    corpus_vec, corpus_label = load_nytimes(nytimes_file, model)
    labels = list(set(corpus_label))
    X_train, X_test, y_train, y_test = train_test_split(corpus_vec, corpus_label, test_size=0.2, random_state=42)
    logging.info('train size: %d, test size:%d' % (len(y_train), len(y_test)))
    clfs = {}
    for label in labels:
        clfs[label] = train(label, X_train, X_test, y_train, y_test)

    y_pred = []
    for each in X_test:
        pred_res = []
        for label in clfs:
            pred_res.append((clfs[label].predict_proba(each.reshape(1, -1))[0][1], label))
        sorted_pred = sorted(pred_res, key=lambda x: x[0], reverse=True)
        y_pred.append(sorted_pred[0][1])
    # precision_recall_fscore_support returns a 4-tuple of arrays; pass
    # labels= so the rows line up with the label list
    precision, recall, f_score, support = precision_recall_fscore_support(y_test, y_pred, labels=labels)
    for l, p, r, f in zip(labels, precision, recall, f_score):
        print('%s\t%.4lf\t%.4lf\t%.4lf' % (l, p, r, f))

    precision, recall, f_score, _ = precision_recall_fscore_support(y_test, y_pred, average='macro')
    print('Macro\t%.4lf\t%.4lf\t%.4lf' % (precision, recall, f_score))
    precision, recall, f_score, _ = precision_recall_fscore_support(y_test, y_pred, average='micro')
    print('Micro\t%.4lf\t%.4lf\t%.4lf' % (precision, recall, f_score))
Developer: zbhno37, Project: srwe, Source file: text_classification.py


Example 12: clf_metrics

def clf_metrics(p_train, p_test, y_train, y_test):
    """ Compute metrics on classifier predictions

    Parameters
    ----------
    p_train : np.array [n_samples]
        predicted probabilities for training set
    p_test : np.array [n_samples]
        predicted probabilities for testing set
    y_train : np.array [n_samples]
        Training labels.
    y_test : np.array [n_samples]
        Testing labels.

    Returns
    -------
    clf_scores : dict
        classifier scores for the training and testing sets
    """
    y_pred_train = 1*(p_train >= 0.5)
    y_pred_test = 1*(p_test >= 0.5)

    train_scores = {}
    test_scores = {}

    train_scores['accuracy'] = metrics.accuracy_score(y_train, y_pred_train)
    test_scores['accuracy'] = metrics.accuracy_score(y_test, y_pred_test)

    train_scores['mcc'] = metrics.matthews_corrcoef(y_train, y_pred_train)
    test_scores['mcc'] = metrics.matthews_corrcoef(y_test, y_pred_test)

    (p, r, f, s) = metrics.precision_recall_fscore_support(y_train,
                                                           y_pred_train)
    train_scores['precision'] = p
    train_scores['recall'] = r
    train_scores['f1'] = f
    train_scores['support'] = s

    (p, r, f, s) = metrics.precision_recall_fscore_support(y_test,
                                                           y_pred_test)
    test_scores['precision'] = p
    test_scores['recall'] = r
    test_scores['f1'] = f
    test_scores['support'] = s

    train_scores['confusion matrix'] = \
        metrics.confusion_matrix(y_train, y_pred_train, labels=[0, 1])
    test_scores['confusion matrix'] = \
        metrics.confusion_matrix(y_test, y_pred_test, labels=[0, 1])

    train_scores['auc score'] = \
        metrics.roc_auc_score(y_train, p_train + 1, average='weighted')
    test_scores['auc score'] = \
        metrics.roc_auc_score(y_test, p_test + 1, average='weighted')

    clf_scores = {'train': train_scores, 'test': test_scores}

    return clf_scores
Developer: EQ4, Project: contour_classification, Source file: clf_utils.py
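One detail in clf_metrics worth noting: roc_auc_score depends only on how the scores rank the samples, so the p_train + 1 and p_test + 1 offsets above are harmless no-ops. A minimal check on made-up labels and scores:

import numpy as np
from sklearn import metrics

y = np.array([0, 1, 1, 0, 1])
p = np.array([0.2, 0.8, 0.6, 0.4, 0.9])
# adding a constant does not change the ranking, hence not the AUC:
print(metrics.roc_auc_score(y, p) == metrics.roc_auc_score(y, p + 1))  # True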


Example 13: melodiness_metrics

def melodiness_metrics(m_train, m_test, y_train, y_test):
    """ Compute metrics on melodiness score

    Parameters
    ----------
    m_train : np.array [n_samples]
        melodiness scores for training set
    m_test : np.array [n_samples]
        melodiness scores for testing set
    y_train : np.array [n_samples]
        Training labels.
    y_test : np.array [n_samples]
        Testing labels.

    Returns
    -------
    melodiness_scores : dict
        melodiness scores for the training and testing sets
    """
    m_bin_train = 1*(m_train >= 1)
    m_bin_test = 1*(m_test >= 1)

    train_scores = {}
    test_scores = {}

    train_scores['accuracy'] = metrics.accuracy_score(y_train, m_bin_train)
    test_scores['accuracy'] = metrics.accuracy_score(y_test, m_bin_test)

    train_scores['mcc'] = metrics.matthews_corrcoef(y_train, m_bin_train)
    test_scores['mcc'] = metrics.matthews_corrcoef(y_test, m_bin_test)

    (p, r, f, s) = metrics.precision_recall_fscore_support(y_train,
                                                           m_bin_train)
    train_scores['precision'] = p
    train_scores['recall'] = r
    train_scores['f1'] = f
    train_scores['support'] = s

    (p, r, f, s) = metrics.precision_recall_fscore_support(y_test,
                                                           m_bin_test)
    test_scores['precision'] = p
    test_scores['recall'] = r
    test_scores['f1'] = f
    test_scores['support'] = s

    train_scores['confusion matrix'] = \
        metrics.confusion_matrix(y_train, m_bin_train, labels=[0, 1])
    test_scores['confusion matrix'] = \
        metrics.confusion_matrix(y_test, m_bin_test, labels=[0, 1])

    train_scores['auc score'] = \
        metrics.roc_auc_score(y_train, m_train + 1, average='weighted')
    test_scores['auc score'] = \
        metrics.roc_auc_score(y_test, m_test + 1, average='weighted')

    melodiness_scores = {'train': train_scores, 'test': test_scores}

    return melodiness_scores
Developer: EQ4, Project: contour_classification, Source file: mv_gaussian.py


Example 14: metric

    def metric(self, tag, rank):
        precision, recall, fbeta, support = precision_recall_fscore_support(self.purchase, tag)
        print("precision of purchase:", precision)
        print("recall of purchase:", recall)

        PAN, a, b, c = precision_recall_fscore_support(self.rating, rank)
        print("P @ N :", PAN)
Developer: chsu16, Project: recommender-system, Source file: ranking.py


Example 15: eval_models

def eval_models(stream1, stream2, predictor1, predictor2):
    source1 = multiplex_streams([stream1, stream2], [0.5, 0.5], 1000)
    source2 = multiplex_streams([stream1, stream2], [0.1, 0.9], 1000)
    for source in source1, source2:
        data = next(source)  # advance the stream one batch
        y_est1 = list(predictor1(data['x']).values())[0].argmax(axis=1)
        y_est2 = list(predictor2(data['x']).values())[0].argmax(axis=1)
        print(metrics.precision_recall_fscore_support(data['y'], y_est1))
        print(metrics.precision_recall_fscore_support(data['y'], y_est2))
Developer: agangzz, Project: dl4mir, Source file: sample_bias.py


Example 16: get_scores

def get_scores(y_pred, y_true):
    scores = precision_recall_fscore_support(y_true=y_true, y_pred=y_pred, labels=[0, 1])
    average = precision_recall_fscore_support(y_true=y_true,
                                              y_pred=y_pred,
                                              average="macro",
                                              pos_label=None,
                                              labels=[0, 1])

    return scores, average
Developer: TatsuyukiIju, Project: data_projection, Source file: test.py


Example 17: NBgauss

def NBgauss(x_train, y_train, x_test, y_test):
    #### Naive Bayes (Gaussian likelihood)
    clf = GaussianNB()
    clf.fit(x_train, y_train)
    predict_y = clf.predict(x_test)
    auc_score = metrics.roc_auc_score(y_test, predict_y)
    print('GaussianNB auc_score=', auc_score)
    print(metrics.precision_recall_fscore_support(y_test, predict_y))
    return auc_score
Developer: SpongeGourd, Project: python, Source file: majia.py


Example 18: AdaBoost

def AdaBoost(x_train, y_train, x_test, y_test):
    ##### AdaBoostClassifier
    clf = AdaBoostClassifier()
    clf.fit(x_train, y_train)
    predict_y = clf.predict(x_test)
    auc_score = metrics.roc_auc_score(y_test, predict_y)
    print('AdaBoost auc_score=', auc_score)
    print(metrics.precision_recall_fscore_support(y_test, predict_y))
    return auc_score
Developer: SpongeGourd, Project: python, Source file: majia.py


Example 19: RF

def RF(x_train, y_train, x_test, y_test):
    ##### RandomForestClassifier
    clf = RandomForestClassifier()
    clf.fit(x_train, y_train)
    predict_y = clf.predict(x_test)
    auc_score = metrics.roc_auc_score(y_test, predict_y)
    print('RF auc_score=', auc_score)
    print(metrics.precision_recall_fscore_support(y_test, predict_y))
    return auc_score
Developer: SpongeGourd, Project: python, Source file: majia.py


Example 20: GBDT

def GBDT(x_train, y_train, x_test, y_test):
    ##### GradientBoostingClassifier
    clf = GradientBoostingClassifier()
    clf.fit(x_train, y_train)
    predict_y = clf.predict(x_test)
    auc_score = metrics.roc_auc_score(y_test, predict_y)
    print('GBDT auc_score=', auc_score)
    print(metrics.precision_recall_fscore_support(y_test, predict_y))
    return auc_score
Developer: SpongeGourd, Project: python, Source file: majia.py
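Examples 17 through 20 share one pattern worth flagging: they pass hard 0/1 predictions to roc_auc_score, which reduces the ROC curve to a single operating point. When a real AUC is wanted, score the class probabilities instead. A minimal sketch on synthetic data (the dataset and split here are invented for illustration):

import numpy as np
from sklearn import metrics
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(0)
x = rng.randn(200, 5)
y = (x[:, 0] + 0.5 * rng.randn(200) > 0).astype(int)
x_train, x_test, y_train, y_test = x[:150], x[150:], y[:150], y[150:]

clf = GaussianNB().fit(x_train, y_train)
proba_y = clf.predict_proba(x_test)[:, 1]  # class-1 probabilities, not hard labels
print('AUC on probabilities:', metrics.roc_auc_score(y_test, proba_y))
print('AUC on hard labels  :', metrics.roc_auc_score(y_test, clf.predict(x_test)))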



Note: The sklearn.metrics.precision_recall_fscore_support examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other source-code hosting and documentation platforms. The snippets were selected from open-source projects contributed by many developers, and copyright remains with the original authors; consult the corresponding project's License before redistributing or reusing the code. Do not reproduce this article without permission.

