Python punkt.PunktLanguageVars Class Code Examples


This article collects typical usage examples of Python's nltk.tokenize.punkt.PunktLanguageVars class. If you are wondering what the PunktLanguageVars class does, how to use it, or what working code with it looks like, the curated examples below should help.



Twenty code examples of the PunktLanguageVars class are shown below, ordered roughly by popularity.
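All of the examples share the same basic pattern: instantiate PunktLanguageVars and call its word_tokenize() method, which splits off punctuation other than periods. A minimal sketch, assuming only that NLTK is installed:

from nltk.tokenize.punkt import PunktLanguageVars

punkt = PunktLanguageVars()
tokens = punkt.word_tokenize("Sentence 1. Sentence 2.")
print(tokens)  # periods stay attached to the preceding word: ['Sentence', '1.', 'Sentence', '2.']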

Example 1: tokenize

 def tokenize(self, string):
     """Tokenize incoming string."""
     punkt = PunktLanguageVars()
     generic_tokens = punkt.word_tokenize(string)
     # Rewrite as an if-else block for exceptions rather than separate list comprehensions
     generic_tokens = [x for item in generic_tokens for x in ([item] if item != 'nec' else ['c', 'ne'])] # Handle 'nec' as a special case.
     generic_tokens = [x for item in generic_tokens for x in ([item] if item != 'sodes' else ['si', 'audes'])] # Handle 'sodes' as a special case.
     generic_tokens = [x for item in generic_tokens for x in ([item] if item != 'sultis' else ['si', 'vultis'])] # Handle 'sultis' as a special case.        
     specific_tokens = []
     for generic_token in generic_tokens:
         is_enclitic = False
         if generic_token not in self.exceptions:
             for enclitic in self.enclitics:
                 if generic_token.endswith(enclitic):
                     if enclitic == 'cum':
                         if generic_token in self.inclusions:
                             specific_tokens += [enclitic] + [generic_token[:-len(enclitic)]]
                         else:
                             specific_tokens += [generic_token]                                                                         
                     elif enclitic == 'st':
                         if generic_token.endswith('ust'):
                             specific_tokens += [generic_token[:-len(enclitic)+1]] + ['est']
                         else:
                             # Does not handle 'similist', 'qualist', etc. correctly
                             specific_tokens += [generic_token[:-len(enclitic)]] + ['est']
                     else:
                         specific_tokens += [enclitic] + [generic_token[:-len(enclitic)]]
                     is_enclitic = True
                     break
         if not is_enclitic:
             specific_tokens.append(generic_token)
     return specific_tokens
Author: vipul-sharma20 | Project: cltk | Lines: 32 | Source: word.py
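This method belongs to a CLTK Latin word tokenizer class that is not shown here, so self.exceptions, self.enclitics and self.inclusions are defined elsewhere. A hypothetical usage sketch; the import path is an assumption and may differ between CLTK versions:

from cltk.tokenize.word import WordTokenizer  # assumed location of the class above

tokenizer = WordTokenizer('latin')
print(tokenizer.tokenize('arma virumque cano'))
# the enclitic 'que' is split off its host word, so 'virumque' yields two tokens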


Example 2: nltk_tokenize_words

def nltk_tokenize_words(string, attached_period=False, language=None):
    """Wrap NLTK's tokenizer PunktLanguageVars(), but make final period
    its own token.
    >>> nltk_punkt("Sentence 1. Sentence 2.")
    >>> ['Sentence', 'one', '.', 'Sentence', 'two', '.']

    Optionally keep the NLTK's output:
    >>> nltk_punkt("Sentence 1. Sentence 2.", attached_period=True)
    >>> ['Sentence', 'one.', 'Sentence', 'two.']

    TODO: Run some tests to determine whether there is a large penalty for
    re-calling PunktLanguageVars() for each use of this function. If so, this
    will need to become a class, perhaps inheriting from the PunktLanguageVars
    object. Maybe integrate with WordTokenizer.
    """
    assert isinstance(string, str), "Incoming string must be type str."
    if language == 'sanskrit':
        periods = ['.', '।', '॥']
    else:
        periods = ['.']
    punkt = PunktLanguageVars()
    tokens = punkt.word_tokenize(string)
    if attached_period:
        return tokens
    new_tokens = []
    for word in tokens:
        for char in periods:
            if word.endswith(char):
                new_tokens.append(word[:-1])
                new_tokens.append(char)
                break
        else:
            new_tokens.append(word)
    return new_tokens
Author: Sarthak30 | Project: cltk | Lines: 34 | Source: word.py


Example 3: tokenize

 def tokenize(self, string):
     """Tokenize incoming string."""
     #punkt = WhitespaceTokenizer()
     punkt= PunktLanguageVars()
     generic_tokens = punkt.word_tokenize(string)
     generic_tokens = [x for item in generic_tokens for x in ([item] if item != 'nec' else ['c', 'ne'])] # Handle 'nec' as a special case.
     specific_tokens = []
     for generic_token in generic_tokens:
         is_enclitic = False
         if generic_token not in self.exceptions:
             for enclitic in self.enclitics:
                 if generic_token.endswith(enclitic):
                     if enclitic == 'cum':
                         if generic_token in self.inclusions:
                             specific_tokens += [enclitic] + [generic_token[:-len(enclitic)]]
                         else:
                             specific_tokens += [generic_token]                                                                         
                     elif enclitic == 'st':
                         if generic_token.endswith('ust'):
                             specific_tokens += [generic_token[:-len(enclitic)+1]] + ['est']
                         else:
                             # Does not handle 'similist', 'qualist', etc. correctly
                             specific_tokens += [generic_token[:-len(enclitic)]] + ['est']
                     else:
                         specific_tokens += [enclitic] + [generic_token[:-len(enclitic)]]
                     is_enclitic = True
                     break
         if not is_enclitic:
             specific_tokens.append(generic_token)
     #return iter(specific_tokens) #change this one into an iterator.
     startPoint=0 #this is to accumulate the start point.
     for item in specific_tokens:
         itemLength=len(item)
         yield item, startPoint, startPoint+itemLength
         startPoint=startPoint+itemLength+1
Author: oudalab | Project: sqlite-fts-python | Lines: 35 | Source: pdb_test_oulatin.py
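Unlike Example 1, this variant is a generator that also yields (token, start, end) triples, with offsets accumulated from token lengths plus one separator character rather than taken from the original string (which suits the SQLite FTS tokenizer interface it feeds). A standalone sketch of just that offset bookkeeping:

specific_tokens = ['arma', 'virum', 'que', 'cano']  # illustrative output of the enclitic splitting
start_point = 0
for item in specific_tokens:
    item_length = len(item)
    print(item, start_point, start_point + item_length)
    start_point = start_point + item_length + 1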


Example 4: tokenize

 def tokenize(self, string):
     """Tokenize incoming string."""
     punkt = PunktLanguageVars()
     generic_tokens = punkt.word_tokenize(string)
     generic_tokens = [x for item in generic_tokens for x in ([item] if item != 'nec' else ['c', 'ne'])] # Handle 'nec' as a special case.
     specific_tokens = []
     for generic_token in generic_tokens:
         is_enclitic = False
         if generic_token not in self.exceptions:
             for enclitic in self.enclitics:
                 if generic_token.endswith(enclitic):
                     if enclitic == 'mst':
                         specific_tokens += [generic_token[:-len(enclitic)+1]] + ['e'+ generic_token[-len(enclitic)+1:]]
                     elif enclitic == 'cum':
                         if generic_token in self.inclusions:
                             specific_tokens += [enclitic] + [generic_token[:-len(enclitic)]]
                         else:
                             specific_tokens += [generic_token]                                                     
                     else:
                         specific_tokens += [enclitic] + [generic_token[:-len(enclitic)]]
                     is_enclitic = True
                     break
         if not is_enclitic:
             specific_tokens.append(generic_token)
     return specific_tokens
Author: ManviG | Project: cltk | Lines: 25 | Source: word.py
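The extra 'mst' branch here expands contracted forms such as 'audiendumst' into 'audiendum' plus 'est'. A standalone sketch of that slicing (the example word is illustrative):

token, enclitic = 'audiendumst', 'mst'
parts = [token[:-len(enclitic) + 1], 'e' + token[-len(enclitic) + 1:]]
print(parts)  # ['audiendum', 'est']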


Example 5: _tokenize

    def _tokenize(self, text):
        """
        Use NLTK's standard tokenizer, rm punctuation.
        :param text: pre-processed text
        :return: tokenized text
        :rtype : list
        """
        sentence_tokenizer = TokenizeSentence('latin')
        sentences = sentence_tokenizer.tokenize_sentences(text.lower())

        sent_words = []
        punkt = PunktLanguageVars()
        for sentence in sentences:
            words = punkt.word_tokenize(sentence)

            assert isinstance(words, list)
            words_new = []
            for word in words:
                if (word not in self.punctuation and word not in self.abbreviations
                        and word not in self.numbers):
                    words_new.append(word)

            # rm all numbers here with: re.sub(r'[0-9]', '', word)
            sent_words.append(words_new)

        return sent_words
Author: Akirato | Project: cltk | Lines: 25 | Source: scanner.py


Example 6: tokenize

def tokenize(doc):
    '''
    INPUT: Document
    OUTPUT: Tokenized and stemmed list of words from the document 
    '''
    plv      = PunktLanguageVars()
    snowball = SnowballStemmer('english')
    return [snowball.stem(word) for word in plv.word_tokenize(doc.lower())]
Author: jonoleson | Project: PriceMyRental | Lines: 8 | Source: featurize.py


Example 7: tokenize

def tokenize(desc):
	'''
	INPUT: List of cleaned descriptions
	OUTPUT: Tokenized and stemmed list of words from the descriptions 
	'''
	plv = PunktLanguageVars()
	snowball = SnowballStemmer('english')
	return [snowball.stem(word) for word in plv.word_tokenize(desc.lower())]
Author: nhu2000 | Project: PriceHome | Lines: 8 | Source: featurize.py
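Examples 6 and 7 are the same helper applied to listing text: PunktLanguageVars does the word splitting and NLTK's SnowballStemmer reduces each lowercased token to its stem. A usage sketch, assuming only that NLTK is installed:

from nltk.stem.snowball import SnowballStemmer
from nltk.tokenize.punkt import PunktLanguageVars

plv = PunktLanguageVars()
snowball = SnowballStemmer('english')
print([snowball.stem(w) for w in plv.word_tokenize('Newly renovated apartments near downtown'.lower())])
# each lowercased token is reduced to its stem; punctuation other than periods becomes separate tokens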


Example 8: tag_ner

def tag_ner(lang, input_text, output_type=list):
    """Run NER for chosen language.
    """

    _check_latest_data(lang)

    assert lang in NER_DICT.keys(), \
        'Invalid language. Choose from: {}'.format(', '.join(NER_DICT.keys()))
    types = [str, list]
    assert type(input_text) in types, 'Input must be: {}.'.format(', '.join(types))
    assert output_type in types, 'Output must be a {}.'.format(', '.join(types))

    if type(input_text) == str:
        punkt = PunktLanguageVars()
        tokens = punkt.word_tokenize(input_text)
        new_tokens = []
        for word in tokens:
            if word.endswith('.'):
                new_tokens.append(word[:-1])
                new_tokens.append('.')
            else:
                new_tokens.append(word)
        input_text = new_tokens

    ner_file_path = os.path.expanduser(NER_DICT[lang])
    with open(ner_file_path) as file_open:
        ner_str = file_open.read()
    ner_list = ner_str.split('\n')

    ner_tuple_list = []
    for count, word_token in enumerate(input_text):
        match = False
        for ner_word in ner_list:
            # the replacer slows things down, but is necessary
            if word_token == ner_word:
                ner_tuple = (word_token, 'Entity')
                ner_tuple_list.append(ner_tuple)
                match = True
                break
        if not match:
            ner_tuple_list.append((word_token,))

    if output_type is str:
        string = ''
        for tup in ner_tuple_list:
            start_space = ' '
            final_space = ''
            # this is some mediocre string reconstitution
            # maybe not worth the effort
            if tup[0] in [',', '.', ';', ':', '?', '!']:
                start_space = ''
            if len(tup) == 2:
                string += start_space + tup[0] + '/' + tup[1] + final_space
            else:
                string += start_space + tup[0] + final_space
        return string

    return ner_tuple_list
Author: cltk | Project: cltk | Lines: 58 | Source: ner.py
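tag_ner() itself needs the CLTK NER word lists on disk, but the string-output branch is easy to follow in isolation: it rebuilds a sentence from the (token,) and (token, 'Entity') tuples. A self-contained sketch of that reconstitution, using a hypothetical tuple list:

ner_tuple_list = [('Gallia', 'Entity'), ('est',), ('omnis',), ('divisa',), ('.',)]  # hypothetical output
string = ''
for tup in ner_tuple_list:
    start_space = '' if tup[0] in [',', '.', ';', ':', '?', '!'] else ' '
    string += start_space + '/'.join(tup)
print(string.strip())  # Gallia/Entity est omnis divisa.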


Example 9: test_latin_stopwords

 def test_latin_stopwords(self):
     """Test filtering Latin stopwords."""
     sentence = 'Quo usque tandem abutere, Catilina, patientia nostra?'
     lowered = sentence.lower()
     punkt = PunktLanguageVars()
     tokens = punkt.word_tokenize(lowered)
     no_stops = [w for w in tokens if w not in LATIN_STOPS]
     target_list = ['usque', 'tandem', 'abutere', ',', 'catilina', ',',
                    'patientia', 'nostra', '?']
     self.assertEqual(no_stops, target_list)
Author: cltk | Project: cltk | Lines: 10 | Source: test_stop.py
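This and the similar stopword tests below (Examples 10-12 and 14) all follow the same pattern with different language stop lists: lowercase the sentence, tokenize with PunktLanguageVars, and drop tokens that appear in the stop set. A generic sketch; the stop set here is illustrative, not a real CLTK list:

from nltk.tokenize.punkt import PunktLanguageVars

def remove_stops(sentence, stops):
    tokens = PunktLanguageVars().word_tokenize(sentence.lower())
    return [w for w in tokens if w not in stops]

print(remove_stops('Quo usque tandem abutere, Catilina, patientia nostra?', {'quo', 'usque', 'tandem'}))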


Example 10: test_french_stopwords

 def test_french_stopwords(self):
     """Test filtering French stopwords."""
     sentence = "En pensé ai e en talant que d ’ Yonec vus die avant dunt il fu nez, e de sun pere cum il vint primes a sa mere ."
     lowered = sentence.lower()
     punkt = PunktLanguageVars()
     tokens = punkt.word_tokenize(lowered)
     no_stops = [w for w in tokens if w not in FRENCH_STOPS]
     target_list = ['pensé', 'talant', 'd', '’', 'yonec', 'die', 'avant', 'dunt', 'nez', ',', 'pere', 'cum', 'primes',
                    'mere','.']
     self.assertEqual(no_stops, target_list)
Author: cltk | Project: cltk | Lines: 10 | Source: test_stop.py


Example 11: test_old_norse_stopwords

 def test_old_norse_stopwords(self):
     """
     Test filtering Old Norse stopwords
     Sentence extracted from Eiríks saga rauða (http://www.heimskringla.no/wiki/Eir%C3%ADks_saga_rau%C3%B0a)
     """
     sentence = 'Þat var einn morgin, er þeir Karlsefni sá fyrir ofan rjóðrit flekk nökkurn, sem glitraði við þeim'
     lowered = sentence.lower()
     punkt = PunktLanguageVars()
     tokens = punkt.word_tokenize(lowered)
     no_stops = [w for w in tokens if w not in OLD_NORSE_STOPS]
     target_list = ['var', 'einn', 'morgin', ',', 'karlsefni', 'rjóðrit', 'flekk', 'nökkurn', ',', 'glitraði']
     self.assertEqual(no_stops, target_list)
Author: cltk | Project: cltk | Lines: 12 | Source: test_stop.py


Example 12: test_greek_stopwords

 def test_greek_stopwords(self):
     """Test filtering Greek stopwords."""
     sentence = 'Ἅρπαγος δὲ καταστρεψάμενος Ἰωνίην ἐποιέετο στρατηίην \
     ἐπὶ Κᾶρας καὶ Καυνίους καὶ Λυκίους, ἅμα ἀγόμενος καὶ Ἴωνας καὶ \
     Αἰολέας.'
     lowered = sentence.lower()
     punkt = PunktLanguageVars()
     tokens = punkt.word_tokenize(lowered)
     no_stops = [w for w in tokens if w not in GREEK_STOPS]
     target_list = ['ἅρπαγος', 'καταστρεψάμενος', 'ἰωνίην', 'ἐποιέετο',
                    'στρατηίην', 'κᾶρας', 'καυνίους', 'λυκίους', ',',
                    'ἅμα', 'ἀγόμενος', 'ἴωνας', 'αἰολέας.']
     self.assertEqual(no_stops, target_list)
Author: cltk | Project: cltk | Lines: 13 | Source: test_stop.py


Example 13: lemmatize

    def lemmatize(self, input_text, return_raw=False, return_string=False):
        """Take incoming string or list of tokens. Lookup done against a
        key-value list of lemmata-headword. If a string, tokenize with
        ``PunktLanguageVars()``. If a final period appears on a token, remove
        it, then re-add once replacement done.
        TODO: rm check for final period, change PunktLanguageVars() to nltk_tokenize_words()
        """
        assert type(input_text) in [list, str], \
            logger.error('Input must be a list or string.')
        if type(input_text) is str:
            punkt = PunktLanguageVars()
            tokens = punkt.word_tokenize(input_text)
        else:
            tokens = input_text

        lemmatized_tokens = []
        for token in tokens:
            # check for final period
            final_period = False
            if token[-1] == '.':
                final_period = True
                token = token[:-1]

            # look for token in lemma dict keys
            if token in self.lemmata.keys():
                headword = self.lemmata[token.lower()]

                # re-add final period if rm'd
                if final_period:
                    headword += '.'

                # append to return list
                if not return_raw:
                    lemmatized_tokens.append(headword)
                else:
                    lemmatized_tokens.append(token + '/' + headword)
            # if token not found in lemma-headword list
            else:
                # re-add final period if rm'd
                if final_period:
                    token += '.'

                if not return_raw:
                    lemmatized_tokens.append(token)
                else:
                    lemmatized_tokens.append(token + '/' + token)
        if not return_string:
            return lemmatized_tokens
        elif return_string:
            return ' '.join(lemmatized_tokens)
Author: Akirato | Project: cltk | Lines: 50 | Source: lemma.py
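The lemmatizer is a plain dictionary lookup with special handling for a trailing period. A minimal standalone sketch of that logic with a tiny, purely illustrative lemma dict (the real class loads a full lemmata-headword mapping):

from nltk.tokenize.punkt import PunktLanguageVars

lemmata = {'amaverat': 'amo', 'puellam': 'puella'}  # hypothetical entries
lemmatized = []
for token in PunktLanguageVars().word_tokenize('amaverat puellam.'):
    final_period = token.endswith('.')
    word = token[:-1] if final_period else token
    headword = lemmata.get(word.lower(), word)
    lemmatized.append(headword + '.' if final_period else headword)
print(lemmatized)  # ['amo', 'puella.']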


Example 14: test_akkadian_stopwords

 def test_akkadian_stopwords(self):
     """
     Test filtering Akkadian stopwrods
     Sentence extracted from the law code of Hammurabi, law 3 (Martha Roth 2nd Edition 1997, Law Collections from
     Mesopotamia and Asia Minor).
     """
     sentence = "šumma awīlum ina dīnim ana šībūt sarrātim ūṣiamma awat iqbû la uktīn šumma dīnum šû dīn napištim awīlum šû iddâk"
     lowered = sentence.lower()
     punkt = PunktLanguageVars()
     tokens = punkt.word_tokenize(lowered)
     no_stops = [w for w in tokens if w not in AKKADIAN_STOPS]
     target_list = ['awīlum', 'dīnim', 'šībūt', 'sarrātim', 'ūṣiamma', 'awat', 'iqbû', 'uktīn', 'dīnum',
                    'dīn', 'napištim', 'awīlum', 'iddâk']
     self.assertEqual(no_stops, target_list)
Author: cltk | Project: cltk | Lines: 14 | Source: test_stop.py


Example 15: _build_concordance

    def _build_concordance(self, text_str):
        """
        Inherit or mimic the logic of ConcordanceIndex() at http://www.nltk.org/_modules/nltk/text.html
        and/or ConcordanceSearchView() & SearchCorpus() at https://github.com/nltk/nltk/blob/develop/nltk/app/concordance_app.py
        :param text_string: Text to be turned into a concordance
        :type text_string: str
        :return: list
        """
        p = PunktLanguageVars()
        orig_tokens = p.word_tokenize(text_str)
        c = ConcordanceIndex(orig_tokens)

        #! rm dupes after index, before loop
        tokens = set(orig_tokens)
        tokens = [x for x in tokens if x not in [',', '.', ';', ':', '"', "'", '[', ']']]  # this needs to be changed or rm'ed

        return c.return_concordance_all(tokens)
Author: Akirato | Project: cltk | Lines: 17 | Source: philology.py


Example 16: tokenize

    def tokenize(self, string):
        """Tokenize incoming string."""
        punkt = PunktLanguageVars()
        generic_tokens = punkt.word_tokenize(string)
        specific_tokens = []
        for generic_token in generic_tokens:
            is_enclitic = False
            if generic_token not in self.exceptions:
                for enclitic in self.enclitics:
                    if generic_token.endswith(enclitic):
                        new_tokens = [generic_token[:-len(enclitic)]] + ['-' + enclitic]
                        specific_tokens += new_tokens
                        is_enclitic = True
                        break
            if not is_enclitic:
                specific_tokens.append(generic_token)

        return specific_tokens
Author: paolomonella | Project: ursus | Lines: 18 | Source: word.py


Example 17: __init__

class Frequency:
    """Methods for making word frequency lists."""

    def __init__(self):
        """Language taken as argument, necessary used when saving word frequencies to
        ``cltk_data/user_data``."""
        self.punkt = PunktLanguageVars()
        self.punctuation = [',', '.', ';', ':', '"', "'", '?', '-', '!', '*', '[', ']', '{', '}']

    def counter_from_str(self, string):
        """Build word frequency list from incoming string."""
        string_list = [chars for chars in string if chars not in self.punctuation]
        string_joined = ''.join(string_list)
        tokens = self.punkt.word_tokenize(string_joined)
        return Counter(tokens)


    def counter_from_corpus(self, corpus):
        """Build word frequency list from one of several available corpora.
        TODO: Make this count iteratively, not all at once
        """
        assert corpus in ['phi5', 'tlg'], \
            "Corpus '{0}' not available. Choose from 'phi5' or 'tlg'.".format(corpus)

        all_strings = self._assemble_corpus_string(corpus=corpus)
        return self.counter_from_str(all_strings)

    def _assemble_corpus_string(self, corpus):
        """Takes a list of filepaths, returns a string containing contents of
        all files."""

        if corpus == 'phi5':
            filepaths = assemble_phi5_author_filepaths()
            file_cleaner = phi5_plaintext_cleanup
        elif corpus == 'tlg':
            filepaths = assemble_tlg_author_filepaths()
            file_cleaner = tlg_plaintext_cleanup

        for filepath in filepaths:
            with open(filepath) as file_open:
                file_read = file_open.read().lower()
            file_clean = file_cleaner(file_read)
            yield file_clean
Author: eamonnbell | Project: cltk | Lines: 43 | Source: frequency.py
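counter_from_str() strips the listed punctuation characters, tokenizes, and returns a collections.Counter. A usage sketch, assuming the original module's own imports (Counter, PunktLanguageVars, and the corpus helpers) are in scope; counter_from_corpus() additionally requires the PHI5/TLG corpora to be installed:

freq = Frequency()
counter = freq.counter_from_str('Gallia est omnis divisa in partes tres. Gallia!')
print(counter.most_common(1))  # [('Gallia', 2)] -- note the class does not lowercase for you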


Example 18: tokenize_latin_words

def tokenize_latin_words(string):
    from cltk.tokenize.latin_exceptions import latin_exceptions

    assert isinstance(string, str), "Incoming string must be type str."

    def matchcase(word):
        # From Python Cookbook
        def replace(m):
            text = m.group()
            if text.isupper():
                return word.upper()
            elif text.islower():
                return word.lower()
            elif text[0].isupper():
                return word.capitalize()
            else:
                return word

        return replace

    replacements = [(r'mecum', 'cum me'),
                    (r'tecum', 'cum te'),
                    (r'secum', 'cum se'),
                    (r'nobiscum', 'cum nobis'),
                    (r'vobiscum', 'cum vobis'),
                    (r'quocum', 'cum quo'),
                    (r'quacum', 'cum qua'),
                    (r'quicum', 'cum qui'),
                    (r'quibuscum', 'cum quibus'),
                    (r'sodes', 'si audes'),
                    (r'satin', 'satis ne'),
                    (r'scin', 'scis ne'),
                    (r'sultis', 'si vultis'),
                    (r'similist', 'similis est'),
                    (r'qualist', 'qualis est')
                    ]

    for replacement in replacements:
        string = re.sub(replacement[0], matchcase(replacement[1]), string, flags=re.IGNORECASE)


    punkt_param = PunktParameters()
    abbreviations = ['c', 'l', 'm', 'p', 'q', 't', 'ti', 'sex', 'a', 'd', 'cn', 'sp', "m'", 'ser', 'ap', 'n', 'v', 'k', 'mam', 'post', 'f', 'oct', 'opet', 'paul', 'pro', 'sert', 'st', 'sta', 'v', 'vol', 'vop']
    punkt_param.abbrev_types = set(abbreviations)
    sent_tokenizer = PunktSentenceTokenizer(punkt_param)

    word_tokenizer = PunktLanguageVars()
    sents = sent_tokenizer.tokenize(string)

    enclitics = ['que', 'n', 'ue', 've', 'st']
    exceptions = enclitics
    exceptions = list(set(exceptions + latin_exceptions))

    tokens = []

    for sent in sents:
        temp_tokens = word_tokenizer.word_tokenize(sent)
        if temp_tokens[0].endswith('ne'):
            if temp_tokens[0].lower() not in exceptions:
                temp = [temp_tokens[0][:-2], '-ne']
                temp_tokens = temp + temp_tokens[1:]

        if temp_tokens[-1].endswith('.'):
            final_word = temp_tokens[-1][:-1]
            del temp_tokens[-1]
            temp_tokens += [final_word, '.']

        for token in temp_tokens:
            tokens.append(token)

    # Break enclitic handling into own function?
    specific_tokens = []

    for token in tokens:
        is_enclitic = False
        if token.lower() not in exceptions:
            for enclitic in enclitics:
                if token.endswith(enclitic):
                    if enclitic == 'n':
                        specific_tokens += [token[:-len(enclitic)]] + ['-ne']
                    elif enclitic == 'st':
                        if token.endswith('ust'):
                            specific_tokens += [token[:-len(enclitic) + 1]] + ['est']
                        else:
                            specific_tokens += [token[:-len(enclitic)]] + ['est']
                    else:
                        specific_tokens += [token[:-len(enclitic)]] + ['-' + enclitic]
                    is_enclitic = True
                    break
        if not is_enclitic:
            specific_tokens.append(token)

    return specific_tokens
Author: mark-keaton | Project: cltk | Lines: 93 | Source: word.py
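A hypothetical usage sketch; it assumes the CLTK version this function ships with, since it relies on latin_exceptions, PunktParameters and PunktSentenceTokenizer imported at module level:

print(tokenize_latin_words('Quis mecum venit? Arma virumque cano.'))
# 'mecum' is rewritten to 'cum me' before tokenizing, '-que' is split off 'virumque',
# and the final period becomes its own token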


Example 19: tokenize

 def tokenize(self, string):
     """Tokenize incoming string."""
     
     def matchcase(word):
         # From Python Cookbook
         def replace(m):
             text = m.group()
             if text.isupper():
                 return word.upper()
             elif text.islower():
                 return word.lower()
             elif text[0].isupper():
                 return word.capitalize()
             else:
                 return word
         return replace
     
     replacements = [(r'mecum', 'cum me'),
             (r'tecum', 'cum te'),
             (r'secum', 'cum se'),
             (r'nobiscum', 'cum nobis'),
             (r'vobiscum', 'cum vobis'),
             (r'quocum', 'cum quo'),
             (r'quacum', 'cum qua'), 
             (r'quicum', 'cum qui'),
             (r'quibuscum', 'cum quibus'),
             (r'sodes', 'si audes'),
             (r'satin', 'satis ne'),
             (r'scin', 'scis ne'),
             (r'sultis', 'si vultis'),
             (r'similist', 'similis est'),
             (r'qualist', 'qualis est')
             ]
             
     for replacement in replacements:
         string = re.sub(replacement[0], matchcase(replacement[1]), string, flags=re.IGNORECASE)
         
     print(string)
     
     punkt = PunktLanguageVars()
     generic_tokens = punkt.word_tokenize(string)
                 
     specific_tokens = []
     for generic_token in generic_tokens:
         is_enclitic = False
         if generic_token.lower() not in self.exceptions:
             for enclitic in self.enclitics:
                 if generic_token.endswith(enclitic):
                     if enclitic == 'n':
                             specific_tokens += [generic_token[:-len(enclitic)]] + ['-ne']                                                                                                    
                     elif enclitic == 'st':
                         if generic_token.endswith('ust'):
                             specific_tokens += [generic_token[:-len(enclitic)+1]] + ['est']
                         else:
                             specific_tokens += [generic_token[:-len(enclitic)]] + ['est']
                     else:
                         specific_tokens += [generic_token[:-len(enclitic)]] + ['-' + enclitic]
                     is_enclitic = True
                     break
         if not is_enclitic:
             specific_tokens.append(generic_token)
     return specific_tokens
Author: jfaville | Project: cltk | Lines: 62 | Source: word.py


Example 20: exec

    for f in docs:
    	# read each document
    	exec("file = codecs.open(datapath+'{0}.txt','r','utf-8')".format(f))
    	content = file.read()
    	file.close()

    	# convert to lowercase
    	content = content.lower()
    	# remove numbers and punctuation for bag of words, bigrams and trigrams
    	toker = RegexpTokenizer(r'\W+|(,.;)+|[0-9]+', gaps=True)
    	nc = toker.tokenize(content)
    	# keep only punctuation for the punctuation-mark representation
    	tokerPunct = RegexpTokenizer(r'[^,.;!?]+', gaps=True)
    	ncPunct = tokerPunct.tokenize(content)

    	p = PunktLanguageVars()
    	ncGreek = p.word_tokenize(content)

    	# remove function words (stopwords)
    	if language=='english':
    		filtered_words = [w for w in nc if not w in stopwords.words('english')]
    	elif language=='greek':
    		filtered_words = [w for w in ncGreek if not w in STOPS_LIST]

        # build a dictionary and count the most common elements for bag of words, bigrams and trigrams
    	contador = Counter(filtered_words)	

    	# get the most common words
    	exec("{0}_mc = contador.most_common(num_common)".format(f))
    	exec("{0}_str_bow = []".format(f))
    	exec("{0}_num_bow = []".format(f))
Author: rockdrigoma | Project: paulisthatyou | Lines: 31 | Source: text2vect.py



Note: The nltk.tokenize.punkt.PunktLanguageVars class examples in this article were compiled by 纯净天空 from source-code and documentation platforms such as GitHub and MSDocs. The code snippets come from open-source projects contributed by their respective authors, and copyright remains with those authors; consult each project's License before distributing or reusing the code. Do not reproduce without permission.

