I've created a parser that scrapes keywords from a PDF document. Currently it scrapes the top keywords and shows the frequency (how many times each word is repeated in the document).
At this point I'm looking to check the frequency of specific keywords; however, when I enter the desired keyword, it joins the word together with the top word and gives the same frequency.
Ideally, I'd like to be able to check the frequency of the keywords 1.) "GRI" 2.) "CDP"
Would greatly appreciate anyone's help here!
import PyPDF2
import pandas as pd
import textract
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import nltk

nltk.download('stopwords')

pdffileobj = open('sample.pdf', 'rb')
pdfreader = PyPDF2.PdfFileReader(pdffileobj)
num_pages = pdfreader.numPages
count = 0
text = ""
while count < num_pages:
    pageObj = pdfreader.getPage(count)
    count += 1
    text += pageObj.extractText()
# Fall back to OCR (via textract) if PyPDF2 could not extract any text;
# textract.process returns bytes, so decode to str before tokenizing
if text.strip() == "":
    text = textract.process('sample.pdf', method='tesseract', language='eng').decode('utf-8')
nltk.download('punkt')
tokens = word_tokenize(text)
punctuations = ['(',')',';',':','[',']',',','!','=','==','<','>','@','#','$','%','^','&','*','.','//','{','}','...','``','+',"''"]
stop_words = stopwords.words('english')
keywords = [word for word in tokens if word not in stop_words and word not in punctuations]
# print(keywords)
#At this point all the keywords in the document show up
freq = pd.Series(' '.join(keywords).split()).value_counts()
#Print results show with frequency
print(freq)
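For reference, here is a minimal standalone sketch of the lookup I'm after, using a hypothetical keyword list in place of the tokens extracted from the PDF. The `freq` Series is built the same way as above, and `.get()` returns a count of 0 when a keyword never appears:

```python
import pandas as pd

# Hypothetical keyword list standing in for the PDF tokens
keywords = ['GRI', 'CDP', 'GRI', 'report', 'CDP', 'GRI']

# Same frequency table as in the script above
freq = pd.Series(keywords).value_counts()

# Look up the counts of the specific keywords of interest
for target in ['GRI', 'CDP']:
    print(target, int(freq.get(target, 0)))
```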