
Python vader.SentimentIntensityAnalyzer Class Code Examples


This article collects and summarizes typical usage examples of the Python class nltk.sentiment.vader.SentimentIntensityAnalyzer. If you have been wondering what the SentimentIntensityAnalyzer class is for, how to use it, or what real-world code that uses it looks like, the curated examples below should help.



Twenty code examples of the SentimentIntensityAnalyzer class are presented below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help our system recommend better Python code examples.
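Before diving into the examples, here is a minimal, self-contained sketch of the basic pattern they all share. It assumes only that NLTK is installed; the VADER lexicon must be downloaded once before the analyzer can be constructed:

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# One-time download of the VADER lexicon (cached locally; safe to re-run)
nltk.download('vader_lexicon')

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("NLTK makes sentiment analysis wonderfully easy!")
# scores is a dict with 'neg', 'neu', 'pos' ratios and a normalized
# 'compound' score in [-1, 1]
print(scores)

Most of the examples below follow this pattern: construct the analyzer once, call polarity_scores() on each piece of text, and read the 'compound' value as the overall sentiment, thresholding it (commonly at about ±0.05, or a stricter value such as ±0.1) to label text positive or negative.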

Example 1: on_data

 def on_data(self, raw_data):
     tweet = loads(raw_data)
     try:
         text = tweet['text']
         if tweet.get('retweeted_status') is None and 'RT @' not in text:
             if tweet.get('coordinates') is None:
                 # TODO: Check for rate limit. If rate limited, then perform location inference
                 nouns = self._get_nouns(tweet_text=text)
                 # bf = BilateralFriends(user_id=tweet['user']['id'], twitter_api=self.api)
                 # loc_occurrence_count = bf.get_location_occurrence()
                 tweet_nouns = defaultdict(int)
                 for noun in nouns:
                     tweet_nouns[noun] += 1
                 self.corpus[tweet['user']['id']] = {'id': tweet['user']['id'],
                                                     'location': tweet['user']['location'],
                                                     # 'bilateral_friends_location_occurrences': loc_occurrence_count,
                                                     'text_nouns': tweet_nouns}
                 loc_inf = LocationInference(user=self.corpus[tweet['user']['id']], local_words=self.local_words,
                                             geo_words=self.geo_words)
                 inferred_location = loc_inf.get_location()
                 print(inferred_location)
                 print('Predicted location:', inferred_location[0])
                 tweet['coordinates'] = {'type': 'Point', 'coordinates': [LOCATIONS[inferred_location[0]][1],
                                                                          LOCATIONS[inferred_location[0]][0]]}
                 print(tweet['coordinates'])
             sentiment_analyzer = SentimentIntensityAnalyzer()
             sentiment_score = sentiment_analyzer.polarity_scores(text=text)['compound']
             tweet['sentiment'] = sentiment_score
             current_time_ms = int(round(time() * 1000))
             tweet['time_inserted'] = current_time_ms
             print(text, ': ', str(sentiment_score))
             STREAM_BUFFER.insert(tweet)
     except KeyError as v:
         print('KeyError: ', v)
Developer: bdeloeste, Project: lima, Lines: 34, Source: streamhandler.py


Example 2: analyze

def analyze(posts):
  post_json = setup_json()
  # for post, replies in posts.items()

  sid = SentimentIntensityAnalyzer()
  for key, value in posts.items():
    nustring = ' '.join(value[0]).replace("u'", "")
    ss = sid.polarity_scores(nustring)
    for k in sorted(ss):
      if k == "compound":
        entry = {}
        entry['name'] = int(ss[k]*len(nustring))
        entry['size'] = len(nustring)
        if ss[k] == 0.0:
          post_json['children'][1]['children'].append(entry)
        elif ss[k] < -0.8:
          post_json['children'][2]['children'][2]['children'].append(entry)
        elif ss[k] < -0.4:
          post_json['children'][2]['children'][1]['children'].append(entry)
        elif ss[k] < 0.0:
          post_json['children'][2]['children'][0]['children'].append(entry)
        elif ss[k] < 0.4:
          post_json['children'][0]['children'][0]['children'].append(entry)
        elif ss[k] < 0.8:
          post_json['children'][0]['children'][1]['children'].append(entry)
        else:
          post_json['children'][0]['children'][2]['children'].append(entry)
  return post_json
Developer: EvanJRichter, Project: coolgraph, Lines: 28, Source: import_data.py


Example 3: add_sentiment_to_comments

def add_sentiment_to_comments():
    sia = SentimentIntensityAnalyzer()
    for story_comment_list in comments.values():
        for comment in story_comment_list:
            if "text" in comment:
                comment["sentiment"] = sia.polarity_scores(comment["text"])
            print(comment)  # each comment now carries its VADER sentiment scores
Developer: davecom, Project: HNSentiment, Lines: 7, Source: hnsentiment.py


Example 4: nltk_sentiment

def nltk_sentiment(tweets):
	sentiment = []
	sid = SentimentIntensityAnalyzer()
	for tweet in tweets:
		st = sid.polarity_scores(tweet)
		sentiment.append(st['compound'])
	return sentiment
Developer: shusnain, Project: Tweet-Mining, Lines: 7, Source: tweets_sentiment.py


Example 5: add_sentiment

 def add_sentiment(self):
     print('Adding sentiment...', end=' ')
     sia = SentimentIntensityAnalyzer()
     for sentiment in ('pos', 'neg', 'neu', 'compound'):
         sentify = lambda s: sia.polarity_scores(s[:200])[sentiment]
         self.df['sentiment_' + sentiment] = self.df['story body'].apply(sentify)
     print('done')
Developer: jperelshteyn, Project: tr_challenge, Lines: 7, Source: features.py


Example 6: main

def main():
    parser = argparse.ArgumentParser(description="Reads in output from " +
        "downloadGroupmeMessages and runs a sentiment analysis")
    parser.add_argument("inFile", help="The file containing the stored messages")
    parser.add_argument("--outFile", default="out.txt", help="Results go here")
    args = parser.parse_args()
    
    print("\nThis program prints the most negative and positive users of the chat ranked according to their average score from the VADER sentiment intensity analyzer in the NLTK. Not super accurate, but it's a fun conversation starter")
    print("The program takes a few seconds to run, and requires that you have some of the NLTK corpora installed.")

    with open(args.inFile, 'r') as infile:
        infile.readline()
        analyzer = SentimentIntensityAnalyzer()
        negList = []
        positiveList = []
        counter = PostSentimentCounter()
        for line in infile:
            line = line.split('\t')
            message = line[3]
            id = line[0]
            name = line[1]
            
            sentDict = analyzer.polarity_scores(message)
            counter.countPost(id, name, sentDict)
        counter.printSentimentLeaderboards()
Developer: ben-heil, Project: GroupmeScripts, Lines: 25, Source: sentimentSorter.py


Example 7: analyze_sentiment_vader_lexicon

def analyze_sentiment_vader_lexicon(review, 
                                    threshold=0.1,
                                    verbose=False):
    # pre-process text
    review = normalize_accented_characters(review)
    review = html_parser.unescape(review)
    review = strip_html(review)
    # analyze the sentiment for review
    analyzer = SentimentIntensityAnalyzer()
    scores = analyzer.polarity_scores(review)
    # get aggregate scores and final sentiment
    agg_score = scores['compound']
    final_sentiment = 'positive' if agg_score >= threshold\
                                   else 'negative'
    if verbose:
        # display detailed sentiment statistics
        positive = str(round(scores['pos'], 2)*100)+'%'
        final = round(agg_score, 2)
        negative = str(round(scores['neg'], 2)*100)+'%'
        neutral = str(round(scores['neu'], 2)*100)+'%'
        sentiment_frame = pd.DataFrame([[final_sentiment, final, positive,
                                        negative, neutral]],
                                        columns=pd.MultiIndex(levels=[['SENTIMENT STATS:'], 
                                                                      ['Predicted Sentiment', 'Polarity Score',
                                                                       'Positive', 'Negative',
                                                                       'Neutral']], 
                                                              labels=[[0,0,0,0,0],[0,1,2,3,4]]))
        print(sentiment_frame)
    
    return final_sentiment
Developer: 000Nelson000, Project: text-analytics-with-python, Lines: 30, Source: sentiment_analysis_unsupervised_lexical.py


Example 8: sentiment_analytis_text

    def sentiment_analytis_text(self,text_insert):

        text = text_insert

        token_text = tokenize.sent_tokenize(text)
        sid = SentimentIntensityAnalyzer()

        over_all_sentiment = 0
        count = 0

        for sentence in token_text:
            score = sid.polarity_scores(sentence)
            # Accumulate the overall sentiment score
            over_all_sentiment += score.get("compound")

            # Count the sentence toward the average only when it is clearly positive
            if score.get("compound") > 0.1:
                count += 1

        # Calculate average sentiment
        if count > 0:
            average_sentiment = over_all_sentiment/count
        else:
            average_sentiment = over_all_sentiment

        return average_sentiment
Developer: c-okelly, Project: movie_script_analytics, Lines: 26, Source: text_objects.py


Example 9: get_tweets

def get_tweets(q, today):
    r = api.request(
        "search/tweets", {"q": "%s since:%s" % (q, today), "count": "100", "result_type": "recent", "lang": "en"}
    )
    data = (json.loads(r.text))["statuses"]
    sid = SentimentIntensityAnalyzer()
    all_tweets = []
    for i in range(0, len(data)):
        text = data[i]["text"].encode("ascii", "ignore").decode("ascii")
        if "RT" in text:
            RT = True
        else:
            RT = False
        others = text.count("@")
        sent = TextBlob(text)
        valance = sent.sentiment.polarity
        NLTK = sid.polarity_scores(text)
        tweet_data = {
            "tweetID": data[i]["id"],
            "created_at": data[i]["created_at"],
            "text": text,
            "textblob": valance,
            "NLTK": NLTK["compound"],
            "RT": RT,
            "others": others,
        }
        # print(data[i])
        all_tweets.append(tweet_data)
    return all_tweets
Developer: chaimsalzer, Project: tweet-analysis, Lines: 29, Source: tweet.py


Example 10: get_unique_tweets

 def get_unique_tweets(self, data_dict):
     # TODO: Implement filter to check if Tweet text starts with 'RT'
     """
     :param data_dict:
     :return:
     """
     flag = False
     try:
         text = data_dict['text'].encode('ascii', 'ignore').decode('ascii').lower()
         # Check for 'retweeted_status' in metadata field to determine
         # if tweet is a retweet (1st check)
         if 'retweeted_status' not in data_dict:
             url_match = URL.match(text)
             # Check if link contains url
             if url_match:
                 match_group = url_match.group()
                 if len(self.key_list) > 0:
                     if any(match_group in item for item in self.key_list):
                         flag = True
                     if flag is False:
                         data_dict['text'] = match_group
                         print("Inserted text: " + data_dict['text'] + '\n')
                         self.key_list.append(match_group)
                         sid = SentimentIntensityAnalyzer()
                         ss = sid.polarity_scores(text)
                         print(ss['compound'])
                         score = ss['compound']
                         if score < 0:
                             score += (3 * score)
                         for w in GOOGLE:
                             if w in text and self.google_price >= 0:
                                 self.google_price = score
                                 self.google_text = text
                         for w in MICROSOFT:
                             if w in text and self.microsoft_price >= 0:
                                 self.microsoft_price = score
                                 self.microsoft_text = text
                         for w in FACEBOOK:
                             if w in text and self.facebook_price >= 0:
                                 self.facebook_price = score
                                 self.facebook_text = text
                         p.trigger('test_channel', 'my_event',
                                   {'google': self.google_price,
                                    'microsoft': self.microsoft_price,
                                    'facebook': self.facebook_price})
                         p.trigger('tweet_channel', 'my_event',
                                   {
                                       'google_text': self.google_text,
                                       'microsoft_text': self.microsoft_text,
                                       'facebook_text' : self.facebook_text
                                   })
                         self.google_price = 0
                         self.microsoft_price = 0
                         self.facebook_price = 0
                 else:
                     self.key_list.append(url_match.group())
     except TypeError as e:
         print(e, file=sys.stderr)
         self.log_error(str(e))
Developer: bdeloeste, Project: hackrice-stockapp, Lines: 59, Source: stream.py


Example 11: sentiment_by_subreddit

def sentiment_by_subreddit():
	phrase = urllib.parse.quote(request.form["text"])
	year = urllib.parse.quote(request.form["year"])

	sid = SentimentIntensityAnalyzer()

	year_str = str(year)
	if int(year) > 2014:
		year_str += "_01"

	query = '''SELECT subreddit, body, score FROM
	(SELECT subreddit, body, score, RAND() AS r1
	FROM [fh-bigquery:reddit_comments.''' + year_str + ''']
	WHERE REGEXP_MATCH(body, r'(?i:''' + phrase + ''')')
	AND subreddit IN (SELECT subreddit FROM (SELECT subreddit, count(*) AS c1 FROM [fh-bigquery:reddit_comments.''' + year_str + '''] WHERE REGEXP_MATCH(body, r'(?i:'''+phrase+''')') AND score > 1 GROUP BY subreddit ORDER BY c1 DESC LIMIT 10))
	ORDER BY r1
	LIMIT 5000)
	'''
	bigquery_service = build('bigquery', 'v2', credentials=credentials)
	try:
		query_request = bigquery_service.jobs()
		query_data = {
			'query': query,
			'timeoutMs': 30000
		}

		query_response = query_request.query(
			projectId=bigquery_pid,
			body=query_data).execute()

	except HttpError as err:
		print('Error: {}'.format(err.content))
		raise err
	
	subreddit_sentiments = defaultdict(list)
	subreddit_total = defaultdict(int)
	
	if 'rows' in query_response:
		rows = query_response['rows']
		sentiments = []
		for row in rows:
			subreddit = row['f'][0]['v']
			body = row['f'][1]['v']
			score = int(row['f'][2]['v'])
			sentiment_values = []
			
			lines_list = tokenize.sent_tokenize(body)
			for sentence in lines_list:
				if phrase.upper() in sentence.upper():
					s = sid.polarity_scores(sentence)
					sentiment_values.append(s['compound'])
		
			# Skip comments where no tokenized sentence actually contained the phrase
			if sentiment_values:
				comment_sentiment = float(sum(sentiment_values)) / len(sentiment_values)
				subreddit_sentiments[subreddit].append((comment_sentiment, score))
				subreddit_total[subreddit] += int(score)

	subreddit_sentiments = {subreddit:1 + float(sum([float(pair[0])*float(pair[1]) for pair in sentiment_list]))/subreddit_total[subreddit] for subreddit, sentiment_list in subreddit_sentiments.items()}
	result = sorted(subreddit_sentiments.items(), key=lambda kv: (-kv[1], kv[0]))
	return json.dumps(result)
Developer: xytosis, Project: cs1951a_flask_code, Lines: 59, Source: start.py


Example 12: vader

    def vader(self):
        sid = SentimentIntensityAnalyzer()
        results = {'neg': 0.0, 'pos': 0.0, 'neu': 0.0, 'compound': 0.0}
        ss = sid.polarity_scores(self.text)
        for k in sorted(ss):
            results[k] += ss[k]

        return results
Developer: EmanuelaMollova, Project: CreeperPP, Lines: 8, Source: preprocessor.py


Example 13: avg_message_sentiment_helper

 def avg_message_sentiment_helper(self, message):
     sentences = tokenize.sent_tokenize(message)
     sid = SentimentIntensityAnalyzer()
     sentence_sentiments = []
     for sentence in sentences:
         ss = sid.polarity_scores(sentence)
         sentence_sentiments.append(ss['compound'])
     return np.mean(sentence_sentiments)
Developer: drlevy, Project: ECS251-Project, Lines: 8, Source: SentimentProcessor.py


Example 14: computeVaderScore

    def computeVaderScore(self,sentence):
        sid = SentimentIntensityAnalyzer()
        ss = sid.polarity_scores(sentence)
        retList = []
        for k in sorted(ss):
            retList.append(ss[k])

        return retList
Developer: thanospappas, Project: ESl, Lines: 8, Source: CompoundSentiment.py


Example 15: sentimentScore

def sentimentScore(sentences):
	analyzer = SentimentIntensityAnalyzer()
	results = []
	for sentence in sentences:
		vs = analyzer.polarity_scores(sentence)
		print("vs: " + str(vs))
		results.append(vs)
	return results
Developer: Vaibhav, Project: Stock-Analysis, Lines: 8, Source: getSentiment.py


Example 16: negative_msg_count

    def negative_msg_count(self):
        sid = SentimentIntensityAnalyzer()
        msg_count = 0
        for sentence in self.get_message_lst():
            ss = sid.polarity_scores(sentence)
            if ss['compound'] < 0:
                msg_count += 1

        return msg_count
Developer: WilliamHammond, Project: fbcanalyzer, Lines: 9, Source: ChatStream.py


Example 17: get_sentiment_score

def get_sentiment_score(sentence):
    score_dict = {}
    sid = SentimentIntensityAnalyzer()
    ss = sid.polarity_scores(sentence)
    for k in sorted(ss):
        # print('{0}: {1}, '.format(k, ss[k]), end='')
        score_dict[k] = ss[k]

    return score_dict
Developer: bommysk, Project: Data301FinalProject, Lines: 9, Source: multiclassifier.py


Example 18: sentiment

def sentiment(sentence):
	sid = SentimentIntensityAnalyzer()
	ss = sid.polarity_scores(sentence)
	if float(ss['neg']) > float(ss['pos']):
		return -1 * float(ss['neg'])
	elif float(ss['neg']) < float(ss['pos']):
		return float(ss['pos'])
	else:
		return 0
Developer: yatindandi, Project: CodeFunDo, Lines: 9, Source: final.py


Example 19: process

	def process(self, tup):

		# extract the sentence
		sentence = tup.values[0]  

		sid = SentimentIntensityAnalyzer()
		ss = sid.polarity_scores(sentence)
		tuple_result = (str(ss['neg']),str(ss['pos']),str(ss['neu']))
		self.emit(tuple_result)
Developer: yahiaMI, Project: Storm, Lines: 9, Source: SentimentAnalysisBolt.py


Example 20: vader_sentiment_scores

def vader_sentiment_scores(text_array):
    sid = SentimentIntensityAnalyzer()
    assert all(isinstance(t, str) for t in text_array)
    vs_dict = {'neg': [], 'neu': [], 'pos': [], 'compound': []}
    for i, text in enumerate(text_array):
        if i % 10000 == 0:
            print(i)
        vs = sid.polarity_scores(text)
        for key, value in vs.items():
            vs_dict[key].append(value)
    return vs_dict
Developer: jonasrothfuss, Project: equity_news_thesis, Lines: 11, Source: bag_of_words_model.py



Note: The nltk.sentiment.vader.SentimentIntensityAnalyzer class examples in this article were compiled from source code and documentation hosted on GitHub, MSDocs, and similar platforms. The snippets come from open-source projects contributed by many developers; copyright remains with the original authors, and any distribution or reuse should follow the corresponding project's license. Please do not republish without permission.

