Python state_union.raw Function Code Examples


This article collects typical usage examples of the Python function nltk.corpus.state_union.raw. If you have been wondering what the raw function does, how to call it, or what real-world uses look like, the curated examples below should help.



Fourteen code examples of the raw function are shown below, sorted by popularity by default.
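Before the examples, a minimal sketch of what raw returns may help: state_union.raw(fileid) gives the full text of one State of the Union address as a single string. The snippet below is illustrative only; it assumes the state_union and punkt data have been fetched with nltk.download, which the examples that follow take for granted.

import nltk
from nltk.corpus import state_union

# One-time downloads; the examples below assume these are already installed.
nltk.download("state_union")
nltk.download("punkt")

print(state_union.fileids()[:3])            # e.g. ['1945-Truman.txt', '1946-Truman.txt', ...]
text = state_union.raw("2006-GWBush.txt")   # full text of one address as a string
print(text[:200])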

Example 1: POS_tagging

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

def POS_tagging(corpus):
    # Train an unsupervised Punkt sentence tokenizer on a State of the Union address.
    train_text = state_union.raw("2005-GWBush.txt")
    sample_text = corpus
    custom_sentence_tokenizer = PunktSentenceTokenizer(train_text)

    tokenized = custom_sentence_tokenizer.tokenize(sample_text)
    tuples_list = []

    def process_content():
        try:
            for sentence in tokenized:
                words = nltk.word_tokenize(sentence)
                tagged = nltk.pos_tag(words)
                for w in tagged:           # each w is a (word, POS-tag) tuple
                    tuples_list.append(w)
        except Exception as e:
            print(str(e))                  # report instead of silently swallowing errors

    process_content()
    return tuples_list
Developer: achuth-noob | Project: CHAT-BOT | Lines: 25 | Source: C_F_testing.py
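A quick, hypothetical call to the function above (the input sentence is made up for illustration and is not part of the original project):

tags = POS_tagging("The President addressed Congress. He spoke about the economy.")
print(tags[:5])   # e.g. [('The', 'DT'), ('President', 'NNP'), ('addressed', 'VBD'), ...]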


Example 2: main

from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

# named_chunks, process_chunks, process_content and text_trained_tokenized
# are defined elsewhere in the project.
def main():
    training_text = state_union.raw('2005-GWBush.txt')
    sample_text = state_union.raw('2006-GWBush.txt')
    custom_sent_tokenizer = PunktSentenceTokenizer(training_text)
    tokenized = custom_sent_tokenizer.tokenize(sample_text)

    choice = 0
    while choice < 5:
        # int() is required: input() returns a string in Python 3, so the
        # original comparison `choice == 1` could never be true.
        choice = int(input(
            "1 for named_chunks (information about proper nouns),\n"
            "2 for process_chunks (reports whether a noun phrase followed by an adverb occurs),\n"
            "3 for process_content (prints the parse results),\n"
            "4 for... "))
        if choice == 1:
            named_chunks(text_trained_tokenized(sample_text, training_text))
        elif choice == 2:
            process_chunks(text_trained_tokenized(sample_text, training_text))
        elif choice == 3:
            process_content(text_trained_tokenized(sample_text, training_text))
        elif choice == 4:
            print("try again!")
Developer: EricChristensen | Project: Python_Randomness | Lines: 17 | Source: PosTagging.py


Example 3: main

import nltk
from nltk.corpus import state_union

# text_ents is defined elsewhere in the project.
def main(argv):
    print("main")
    with open("north_korea.txt") as f:
        text = f.read()              # read in the original but not used below
    johnson = state_union.raw("1968-Johnson.txt")
    ent_list = text_ents(johnson)
    ent_freq = nltk.FreqDist(ent_list)
    print(ent_freq.most_common())
    print(ent_freq)
    print(list(ent_freq.values()))
    print(list(ent_freq.keys()))
Developer: EricChristensen | Project: Python_Randomness | Lines: 14 | Source: NamedEnt.py


Example 4: name_ent_recog

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

def name_ent_recog(post):
    train_text = state_union.raw("2005-GWBush.txt")
    sample_text = post
    custom_sent_tokenizer = PunktSentenceTokenizer(train_text)
    tokenized = custom_sent_tokenizer.tokenize(sample_text)
    namedEnt = []
    try:
        for sentence in tokenized:
            words = nltk.word_tokenize(sentence)
            tagged = nltk.pos_tag(words)
            namedEnt.append(nltk.ne_chunk(tagged))   # one named-entity tree per sentence
    except Exception as e:
        print(str(e))
    return namedEnt
Developer: achuth-noob | Project: CHAT-BOT | Lines: 14 | Source: join_sub_obj.py
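A hypothetical call (the input text is invented for illustration; it is not part of the original project):

trees = name_ent_recog("Barack Obama visited Berlin. He met Angela Merkel.")
for tree in trees:
    print(tree)    # each tree marks PERSON, GPE, ORGANIZATION, ... subtrees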


Example 5: POS_tagging

import nltk
from nltk.corpus import state_union

def POS_tagging(corpus):
    train_text = state_union.raw("2005-GWBush.txt")   # loaded in the original but never used
    sample_text = " ".join(corpus)                    # corpus is an iterable of strings
    tuples_list = []

    def process_content():
        try:
            words = nltk.word_tokenize(sample_text)
            tagged = nltk.pos_tag(words)
            for w in tagged:
                tuples_list.append(w)
        except Exception as e:
            print(str(e))

    process_content()
    return tuples_list
Developer: achuth-noob | Project: CHAT-BOT | Lines: 16 | Source: features_deciding.py


Example 6: PunktSentenceTokenizer

# -*- coding: utf-8 -*-
"""
Created on Thu Nov 19 09:15:11 2015

@author: nilakant
"""

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer  # unsupervised sentence tokenizer

train_text = state_union.raw("2006-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")

custom_sent_tokenizer = PunktSentenceTokenizer(train_text)
tokenized = custom_sent_tokenizer.tokenize(sample_text)

def process_content():
    try:
        for sentence in tokenized:
            words = nltk.word_tokenize(sentence)
            tagged = nltk.pos_tag(words)

            # Chunk every tag sequence, then "chink" verbs, prepositions,
            # determiners and "to" back out of the chunks.
            chunkGram = r"""Chunk: {<.*>+}
                                   }<VB.?|IN|DT|TO>+{"""

            chunkParser = nltk.RegexpParser(chunkGram)
            chunked = chunkParser.parse(tagged)
            chunked.draw()   # opens a tree-view window per sentence
    except Exception as e:   # the excerpt was cut off before its except clause
        print(str(e))

process_content()
Developer: MIS407 | Project: pyFiles | Lines: 30 | Source: chinkikng.py
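For readers who would rather not open a GUI window per sentence, a minimal non-GUI variant (an assumed alternative, not in the original source) prints the chunk subtrees instead:

for sentence in tokenized[:3]:    # first three sentences only
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    chunked = nltk.RegexpParser(r"""Chunk: {<.*>+}
                                           }<VB.?|IN|DT|TO>+{""").parse(tagged)
    for subtree in chunked.subtrees(filter=lambda t: t.label() == "Chunk"):
        print(subtree)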


Example 7: stem_text

from nltk.corpus import state_union
from nltk.tokenize import word_tokenize           # needed by stem_text below
from nltk.corpus import stopwords                 # needed by stem_text below
from nltk.stem import PorterStemmer     # gives the stem of a word, to help "normalize" text
from nltk.stem import WordNetLemmatizer # like stemming, but returns a complete word or synonym
from nltk.corpus import wordnet, movie_reviews   # movie_reviews: 1000 positive and 1000 negative reviews
import random   # to shuffle the movie reviews, since the first 1000 are positive and the rest negative
import pickle





my_text = """The World Wide Web, or simply Web, is a way of accessing information over the medium of the Internet. It is an information-sharing model that is built on top of the Internet. The Web uses the HTTP protocol, only one of the languages spoken over the Internet, to transmit data. Web services, which use HTTP to allow applications to communicate in order to exchange business logic, use the the Web to share information. The Web also utilizes browsers, such as Internet Explorer or Firefox, to access Web documents called Web pages that are linked to each other via hyperlinks. Web documents also contain graphics, sounds, text and video.
The Web is just one of the ways that information can be disseminated over the Internet. The Internet, not the Web, is also used for e-mail, which relies on SMTP, Usenet news groups, instant messaging and FTP. So the Web is just a portion of the Internet, albeit a large portion, but the two terms are not synonymous and should not be confused."""

address = state_union.raw('2006-GWBush.txt')


def stem_text(text):
    """Reduce the text to its stems and remove the stop words."""
    tokenized_text = word_tokenize(text)
    # List comprehension that filters the stop words out of the tokenized text.
    stopped_text = [word for word in tokenized_text if word not in stopwords.words('english')]
    stemmed_list = []
    ps = PorterStemmer()   # stemming helps "normalize" text
    for word in stopped_text:
        stemmed_list.append(ps.stem(word))
    print('text has been stemmed')
    return stemmed_list
Developer: emailkgnow | Project: nltk_tut | Lines: 31 | Source: NLTK+Training.py
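A hypothetical call using the sample paragraph defined above (not part of the original file):

print(stem_text(my_text)[:10])   # first ten stems, stop words removed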


Example 8: PunktSentenceTokenizer

from os import path

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

import sys
from termcolor import *
import termcolor

import textblob
from textblob import TextBlob
from textblob.translate import Translator

# Training text for identifying verbs, nouns, etc.
train_text = state_union.raw("2005-GWBush.txt")
custom_sent_tokenizer = PunktSentenceTokenizer(train_text)

#Color Codes corresponding to Tags for Verbs, Nouns etc
TagCodes = {'CC': 6, 'CD': 1, 'DT': 6, 'EX': 6, 'FW': 6, 'IN': 6, 'JJ': 0, 'JJR': 0, 'JJS': 0, 'LS': 2, 'MD': 2, 'NN': 1, 'NNS': 1, 'NNP': 2, 'NNPS': 2, 'PDT': 6, 'POS': 6, 'PRP': 5, 'PRP$': 5, 'RB': 4, 'RBR': 4, 'RBS': 4, 'RP': 4, 'TO': 7, 'UH': 2, 'VB': 3, 'VBD': 3, 'VBG': 3, 'VBN': 3, 'VBP': 3, 'VBZ': 3, 'WDT': 6, 'WP': 5, 'WP$': 5, 'WRB': 5};

ColorCodes = {0: 'grey', 1: 'red', 2: 'green', 3: 'yellow', 4: 'blue', 5: 'magenta', 6: 'cyan', 7: 'white'}

#Each language is assigned a short code for translation
LanguageCodes = {'afrikaans' : 'af','albanian' : 'sq','arabic' : 'ar','armenian' : 'hy','azerbaijani' : 'az','basque' : 'eu','belarusian' : 'be','bengali' :'bn','bosnian' : 'bs','bulgarian' : 'bg','catalan' : 'ca','cebuano' : 'ceb','chichewa' : 'ny','chinese-simplified' : 'zh-CN','chinese-traditional' : 'zh-TW','croatian' : 'hr','czech' : 'cs','danish' : 'da','dutch' : 'nl','english' : 'en','esperanto' : 'eo','estonian' : 'et','filipino' : 'tl','finnish' : 'fi','french' : 'fr','galician' : 'gl','georgian' : 'ka','german' : 'de','greek' : 'el','gujarati' : 'gu','haitian-creole' : 'ht','hausa' : 'ha','hebrew' : 'iw','hindi' : 'hi','hmong' : 'hmn','hungarian' : 'hu','icelandic' : 'is','igbo' : 'ig','indonesian' : 'id','irish' : 'ga','italian' : 'it','japanese' : 'ja','javanese' : 'jw','kannada' :'kn','kazakh' : 'kk','khmer' : 'km','korean' : 'ko','lao' : 'lo','latin' : 'la','latvian' : 'lv','lithuanian' : 'lt','macedonian' : 'mk','malagasy' : 'mg','malay' : 'ms','malayalam' : 'ml','maltese' : 'mt','maori' : 'mi','marathi' : 'mr','mongolian' :'mn','burmese' : 'my','nepali' : 'ne','norwegian' : 'no','persian' : 'fa','polish' : 'pl','portuguese' : 'pt','punjabi' : 'ma','romanian' : 'ro','russian' : 'ru','serbian' : 'sr','sesotho' : 'st','sinhala' : 'si','slovak' : 'sk','slovenian' :'sl','somali' : 'so','spanish' : 'es','sudanese' : 'su','swahili' : 'sw','swedish' : 'sv','tajik' : 'tg','tamil' : 'ta','telugu' : 'te','thai' : 'th','turkish' : 'tr','ukrainian' : 'uk','urdu' : 'ur','uzbek' : 'uz','vietnamese' : 'vi','welsh' : 'cy','yiddish' : 'yi','yoruba' : 'yo','zulu' : 'zu'}


#Tags corresponding to Verbs, Nouns etc
'''
POS tag list:
Developer: CodeCorp | Project: Utilities | Lines: 30 | Source: gensubs.py
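A minimal sketch (an assumption, not code from the original project) of how the TagCodes and ColorCodes tables above could drive termcolor to print a POS-colored sentence; the sentence is made up for illustration:

import nltk
from termcolor import colored

def print_colored(text):
    for sent in custom_sent_tokenizer.tokenize(text):
        for word, tag in nltk.pos_tag(nltk.word_tokenize(sent)):
            color = ColorCodes[TagCodes.get(tag, 7)]   # unknown tags default to white
            print(colored(word, color), end=" ")
    print()

print_colored("The quick brown fox jumps over the lazy dog.")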


Example 9: chunk

#!/usr/bin/env python

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer  # unsupervised sentence tokenizer


train_text = state_union.raw('2005-GWBush.txt')
test_text = state_union.raw('2006-GWBush.txt')

custom_sent_token = PunktSentenceTokenizer(train_text)
tokenized = custom_sent_token.tokenize(test_text)

def chunk():
    try:
        for i in tokenized:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)

            # Chunk optional adverbs and verbs followed by proper nouns,
            # then chink verbs, prepositions, determiners and "to" back out.
            regexp = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}
                                }<VB.?|IN|DT|TO>+{"""

            parser = nltk.RegexpParser(regexp)
            # The original excerpt was cut off after the line above; the parse
            # and print below complete it minimally.
            chunked = parser.parse(tagged)
            print(chunked)
    except Exception as e:
        print(str(e))
Developer: allanis79 | Project: machine-learning | Lines: 30 | Source: chunkk.py


Example 10: PunktSentenceTokenizer

import nltk
from nltk.tokenize import PunktSentenceTokenizer
from nltk.corpus import state_union

train_text = state_union.raw('2005-GWBush.txt')
sample_text = state_union.raw('2006-GWBush.txt')

custom_sent_tokeniser = PunktSentenceTokenizer(train_text)
tokenized = custom_sent_tokeniser.tokenize(sample_text)

def process_content():
    try:
        for i in tokenized:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            namedEntity = nltk.ne_chunk(tagged, binary=False)   # label each entity type individually
            namedEntity.draw()
    except Exception as e:
        print(str(e))   # Python 3 print; the original used Python 2's `print str(e)`

process_content()
Developer: cosmos-sajal | Project: NLTK | Lines: 21 | Source: namedEntityRecognition.py
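Drawing one window per sentence is slow; a non-GUI alternative (a sketch assumed here, not in the original source) collects the labelled entity subtrees instead:

def entity_strings(sentences):
    entities = []
    for sent in sentences:
        tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sent)), binary=False)
        for subtree in tree.subtrees():
            if subtree.label() != "S":   # entity subtrees: PERSON, GPE, ORGANIZATION, ...
                entities.append((subtree.label(), " ".join(w for w, t in subtree.leaves())))
    return entities

print(entity_strings(tokenized[:5]))   # entities in the first five sentences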


Example 11: buildhtml

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

def buildhtml(tokenized_sentence, sentence_count):
    html = ""
    starting_div = ("<div class=\"panel panel-primary\"> <div class=\"panel-heading\"> Sentence "
                    + str(sentence_count) + "</div><div class=\"panel-body\">")
    ending_div = "</div></div>"
    html += starting_div
    try:
        for token in tokenized_sentence:
            words = nltk.word_tokenize(token)
            tagged = nltk.pos_tag(words)
            for word in tagged:
                if word[1] in tagdict:
                    html += ("<a href=\"#\" data-toggle=\"tooltip\" title=\""
                             + tagdict[word[1]][0] + "\">" + word[0] + "</a>")
        # Close the panel once, after all tokens: the original appended
        # ending_div and returned inside the loop, dropping later tokens.
        html += ending_div
        return html
    except Exception as e:
        print(str(e))

text = state_union.raw("/Users/ponrajuganesh/Desktop/data.txt")
sent_detector = nltk.data.load('tokenizers/punkt/english.pickle')

tagdict = nltk.data.load("help/tagsets/" + "upenn_tagset" + ".pickle")
count = 0
fulldiv = ""
for sentence in sent_detector.tokenize(text):
    count += 1
    custom_sent_tokenizer = PunktSentenceTokenizer()
    fulldiv += buildhtml(custom_sent_tokenizer.tokenize(sentence), count)

print(fulldiv)
Developer: ponrajuganesh | Project: POSTagger | Lines: 30 | Source: pos.py


Example 12: getPresFromSpeech

from nltk.corpus import state_union
from nltk.tokenize import word_tokenize

def getPresFromSpeech(speech_id):
    # e.g. "2001-GWBush-1.txt" -> "GWBush"
    words = speech_id.split('.')

    if len(words) > 0:
        single_words = words[0].split('-')
        if len(single_words) > 0:
            for word in single_words:
                if word.isalpha():
                    return word
    return ""

all_words = {}
for speech_id in state_union.fileids():
    text = state_union.raw(speech_id)
    words = word_tokenize(text)
    for word in words:
        if word not in all_words:   # plain membership test; .keys() was redundant
            all_words[word] = 1
        else:
            all_words[word] += 1

sent_len = []
word_len = []

pres_list = []
pres_sent_total = {}
pres_word_total = {}
pres_char_total = {}
pres_uniq_word = {}
Developer: hbdhj | Project: python | Lines: 30 | Source: state_union_style.py
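The excerpt stops right after declaring the per-president accumulators. A hedged sketch of how they might be filled in using the helper above (this continuation is a guess, not the original code):

for speech_id in state_union.fileids():
    pres = getPresFromSpeech(speech_id)     # president name parsed from the file id
    if pres and pres not in pres_list:
        pres_list.append(pres)
    tokens = word_tokenize(state_union.raw(speech_id))
    pres_word_total[pres] = pres_word_total.get(pres, 0) + len(tokens)

print(pres_word_total)   # total token count per president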


Example 13:

from nltk.corpus import state_union
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

total_word_freq = {}
word_freq_per_speech = {}
word_num_per_speech = {}

total_word_num = 0

en_stopwords = stopwords.words('english')

for fileid in state_union.fileids():
    word_freq_per_speech[fileid] = {}
    word_num = 0
    sample = state_union.raw(fileid)
    words = word_tokenize(sample)
    for word in words:
        lower_word = word.lower()
        if lower_word not in en_stopwords and lower_word.isalpha():
            word_num += 1
            if lower_word not in total_word_freq:   # .keys() was redundant
                total_word_freq[lower_word] = 1
            else:
                total_word_freq[lower_word] += 1
            if lower_word not in word_freq_per_speech[fileid]:
                word_freq_per_speech[fileid][lower_word] = 1
            else:
                word_freq_per_speech[fileid][lower_word] += 1
    # print(fileid, word_num)
    word_num_per_speech[fileid] = word_num
Developer: hbdhj | Project: python | Lines: 31 | Source: state_union_charactorize.py
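A small follow-up sketch (not in the original excerpt) that displays the ten most frequent non-stopword tokens across all speeches, using the totals built above:

top_ten = sorted(total_word_freq.items(), key=lambda kv: kv[1], reverse=True)[:10]
for word, freq in top_ten:
    print(word, freq)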


Example 14: extract_entities

import nltk
import matplotlib.pyplot as plt
from nltk.corpus import state_union

# The excerpt begins mid-file, so this helper lost its body; the standard
# NLTK-book implementation is restored here as a best guess so the code runs.
def extract_entity_names(t):
    entity_names = []
    if hasattr(t, 'label') and t.label:
        if t.label() == 'NE':
            entity_names.append(' '.join(child[0] for child in t))
        else:
            for child in t:
                entity_names.extend(extract_entity_names(child))
    return entity_names

def extract_entities(taggedText):
    '''
    Collect entity names from parsed text.
    :param taggedText: parsed text (output of the NE chunker) as a list of trees
    :return: list of entity-name strings; frequency counting is left to the caller
    '''
    entity_names = []
    for tree in taggedText:
        entity_names.extend(extract_entity_names(tree))
    return entity_names


#get year and words for each file
extracted = [(state_union.raw(fileid), int(fileid[:4])) for fileid in state_union.fileids()]
docs, years = zip(*extracted)

#break text down into sentences, tokens
tokens = [nltk.word_tokenize(text) for text in docs]
sents = [nltk.sent_tokenize(text.replace("\n", " ")) for text in docs]
senttokens = [[nltk.word_tokenize(sent) for sent in entry] for entry in sents]

#get counts of unique words and plot over time
unique = [len(set(words)) for words in tokens]
plt.scatter(years, unique)
plt.show()

#get unique/total ratio
ratios = [(float(len(set(words)))/float(len(words))) for words in tokens]
plt.scatter(years, ratios)
Developer: ab6 | Project: QConSF-2016 | Lines: 31 | Source: Module2-DataAnalysis-Solution.py
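A hypothetical end-to-end use of extract_entities (not shown in the excerpt): NE-chunk the first speech with binary labels, which produces the 'NE' trees the helper expects, then count the entities.

trees = [nltk.ne_chunk(nltk.pos_tag(toks), binary=True) for toks in senttokens[0]]
entity_counts = nltk.FreqDist(extract_entities(trees))
print(entity_counts.most_common(10))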



Note: The nltk.corpus.state_union.raw examples in this article were compiled by 纯净天空 from source-code and documentation platforms such as GitHub and MSDocs. The code fragments were selected from open-source projects contributed by various developers; copyright remains with the original authors, and redistribution or use should follow each project's license. Please do not republish without permission.

