python - How to add a feature to a vectorized data set?

I want to write a Naive Bayes text classifier. Because sklearn does not accept 'text form' features, I am transforming them using TfidfVectorizer.

I was able to create such a classifier using only the transformed data as features. The code looks like this:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.naive_bayes import GaussianNB

### text vectorization--go from strings to lists of numbers
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,
                         stop_words='english')

X_train_transformed = vectorizer.fit_transform(X_train_raw['url'])
X_test_transformed  = vectorizer.transform(X_test_raw['url'])

### feature selection, because text is super high dimensional and 
### can be really computationally chewy as a result
selector = SelectPercentile(f_classif, percentile=1)
selector.fit(X_train_transformed, y_train_raw)

X_train = selector.transform(X_train_transformed).toarray()
X_test  = selector.transform(X_test_transformed).toarray()

clf = GaussianNB()
clf.fit(X_train, y_train_raw)
.....

Everything works as intended, but I am having problems when I want to add another feature, e.g. a flag indicating whether the given text contains a certain keyword. I tried multiple ways of transforming the 'url' feature and then combining the transformed feature with another boolean feature, but I was unsuccessful. Any tips on how it should be done, assuming that I have a pandas DataFrame containing two features: 'url' (which I want to transform) and a 'contains_keyword' flag?

The solution which failed looks like this:

vectorizer = CountVectorizer(min_df=1)
X_train_transformed = vectorizer.fit_transform(X_train_raw['url'])
X_test_transformed  = vectorizer.transform(X_test_raw['url'])
selector = SelectPercentile(f_classif, percentile=1)
selector.fit(X_train_transformed, y_train_raw)

X_train_selected = selector.transform(X_train_transformed)
X_test_selected  = selector.transform(X_test_transformed)

X_train_raw['transformed_url'] = X_train_selected.toarray().tolist()
X_train_without = X_train_raw.drop(['url'], axis=1)
X_train = X_train_without.values

This produces rows containing a boolean flag and a list, which is the wrong input for an sklearn model. I have no idea how I should properly transform this. Grateful for any help.

Here are test data:

url,target,ads_keyword
googleadapis l google com,1,True
googleadapis l google com,1,True
clients1 google com,1,False
c go-mpulse net,1,False
translate google pl,1,False

url - split domain taken from a DNS query

target - target class for classification

ads_keyword - flag indicating whether the 'url' contains the word 'ads'.

I want to transform 'url' using TfidfVectorizer and use the transformed data together with 'ads_keyword' (and possibly more features in the future) as the features to train the Naive Bayes model.
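
For reference, a minimal sketch of that kind of combination, assuming the vectorizer/selector and variable names from the failed snippet above: instead of storing a list per row in the DataFrame, the boolean column is stacked next to the selected TF-IDF matrix with scipy.sparse.hstack.

from scipy.sparse import hstack, csr_matrix

# turn the boolean 'ads_keyword' flag into a sparse column vector
extra_train = csr_matrix(X_train_raw['ads_keyword'].values.reshape(-1, 1), dtype=float)
extra_test  = csr_matrix(X_test_raw['ads_keyword'].values.reshape(-1, 1), dtype=float)

# append it to the selected TF-IDF features;
# .toarray() only because GaussianNB needs dense input
X_train = hstack([X_train_selected, extra_train]).toarray()
X_test  = hstack([X_test_selected,  extra_test]).toarray()

clf = GaussianNB()
clf.fit(X_train, y_train_raw)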



1 Reply


Here is a demo showing how to combine (union) features and how to tune hyperparameters using GridSearchCV.

Unfortunately your sample data set is too tiny to train a real model...

import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.feature_selection import SelectPercentile
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.naive_bayes import MultinomialNB, GaussianNB
from sklearn.pipeline import Pipeline, FeatureUnion
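
# NOTE (assumption): the demo below expects a DataFrame `df` holding the columns
# from the question ('url', 'ads_keyword', 'target'); here it is loaded from a
# hypothetical 'data.csv' containing the sample rows shown above.  Five rows are,
# of course, far too few for the cross-validated search below to be meaningful.
df = pd.read_csv('data.csv')
df['ads_keyword'] = df['ads_keyword'].astype('category')  # so ColumnSelector(as_cat_codes=True) can use .cat.codes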


class ColumnSelector(BaseEstimator, TransformerMixin):
    """Select a single DataFrame column (by name or position) for use in a Pipeline/FeatureUnion."""

    def __init__(self, name=None, position=None,
                 as_cat_codes=False, sparse=False):
        self.name = name                   # column name, e.g. 'url'
        self.position = position           # or positional index
        self.as_cat_codes = as_cat_codes   # return .cat.codes for 'category' columns
        self.sparse = sparse               # return a sparse column vector

    def fit(self, X, y=None):
        return self

    def transform(self, X, **kwargs):
        if self.name is not None:
            col_pos = X.columns.get_loc(self.name)
        elif self.position is not None:
            col_pos = self.position
        else:
            raise ValueError('either [name] or [position] parameter must be not-None')
        if self.as_cat_codes and X.dtypes.iloc[col_pos] == 'category':
            ret = X.iloc[:, col_pos].cat.codes
        else:
            ret = X.iloc[:, col_pos]
        if self.sparse:
            ret = csr_matrix(ret.values.reshape(-1, 1))
        return ret
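
# The FeatureUnion below horizontally stacks the outputs of its two sub-pipelines:
# the TF-IDF matrix built from 'url' and the sparse 'ads_keyword' column.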

union = FeatureUnion([
            ('text', 
             Pipeline([
                ('select', ColumnSelector('url')),
                #('pct', SelectPercentile(percentile=1)),
                ('vect', TfidfVectorizer(sublinear_tf=True, max_df=0.5,
                                         stop_words='english')),
             ]) ),
            ('ads',
             Pipeline([
                ('select', ColumnSelector('ads_keyword', sparse=True,
                                          as_cat_codes=True)),
                #('scale', StandardScaler(with_mean=False)),
             ]) )
        ])

pipe = Pipeline([
    ('union', union),
    ('clf', MultinomialNB())
])
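
# The parameter grid below searches two candidate configurations: a linear
# SGDClassifier and MultinomialNB, each with its own vectorizer settings and
# alpha grid.  (GaussianNB is left commented out because it does not accept
# sparse input.)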

param_grid = [
    {
        'union__text__vect': [TfidfVectorizer(sublinear_tf=True,
                                              max_df=0.5,
                                              stop_words='english')],
        'clf': [SGDClassifier(max_iter=500)],
        'union__text__vect__ngram_range': [(1,1), (2,5)],
        'union__text__vect__analyzer': ['word','char_wb'],
        'clf__alpha': np.logspace(-5, 0, 6),
        #'clf__max_iter': [500],
    },
    {
        'union__text__vect': [TfidfVectorizer(sublinear_tf=True,
                                              max_df=0.5,
                                              stop_words='english')],
        'clf': [MultinomialNB()],
        'union__text__vect__ngram_range': [(1,1), (2,5)],
        'union__text__vect__analyzer': ['word','char_wb'],
        'clf__alpha': np.logspace(-4, 2, 7),
    },
    #{        # NOTE: does NOT support sparse matrices!
    #    'union__text__vect': [TfidfVectorizer(sublinear_tf=True,
    #                                          max_df=0.5,
    #                                          stop_words='english')],
    #    'clf': [GaussianNB()],
    #    'union__text__vect__ngram_range': [(1,1), (2,5)],
    #    'union__text__vect__analyzer': ['word','char_wb'],
    #},
]

gs_kwargs = dict(scoring='roc_auc', cv=3, n_jobs=1, verbose=2)
X_train, X_test, y_train, y_test = train_test_split(
    df[['url','ads_keyword']], df['target'], test_size=0.33)
grid = GridSearchCV(pipe, param_grid=param_grid, **gs_kwargs)
grid.fit(X_train, y_train)

# prediction
predicted = grid.predict(X_test)
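
Once the grid search has been fitted (on a data set large enough, unlike the five sample rows), the chosen hyperparameters can be inspected and the held-out split scored; grid.score reuses the roc_auc scorer passed via gs_kwargs:

print(grid.best_params_)
print('CV ROC AUC:   {:.3f}'.format(grid.best_score_))
print('test ROC AUC: {:.3f}'.format(grid.score(X_test, y_test)))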

