Looks like you messed up your indentation. You have:

    for a in words:
        print a
    syns = wn.synsets(a)

Seems like syns = wn.synsets(a) should be inside the words for loop so you can do this for every word:

    for w in words:
        print w
        syns = wn.synsets(w)
        for s in syns:
            print
            print "definition:", s.definition
            print "synonyms:"
            for l in s.lemmas:
                print l.name
            print "examples:"
            for be in s.examples:
                print be
            print
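For reference, here is the same loop as a self-contained script. The sample sentence is only a placeholder for however you actually build words, and it assumes the pre-3.0 NLTK API where definition, lemmas, and name are attributes rather than methods:

    import nltk
    from nltk.corpus import wordnet as wn

    # Placeholder input: any list of word tokens will do here.
    words = nltk.wordpunct_tokenize("Dogs gladden their owners")

    for w in words:
        print w
        for s in wn.synsets(w):
            print "definition:", s.definition
            print "synonyms:", ", ".join(l.name for l in s.lemmas)
            print "examples:", "; ".join(s.examples)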
A lemma has a synset attribute, which in turn has its part of speech in its pos attribute. So, if we have a lemma l, we can access its part of speech like this:

    >>> l = wn.lemma('gladden.v.01.joy')
    >>> l.synset.pos
    'v'

More generally, we can extend this into a loop that reads through your file. I'm using the with statement because it closes the file nicely once the loop is completed.

    >>> with open('sample.txt') as f:
    ...     raw = f.read()
    ...     for sentence in nltk.sent_tokenize(raw):
    ...         sentence = nltk.wordpunct_tokenize(sentence)
    ...         for word in sentence:
    ...             for synset in wn.synsets(word):
    ...                 for lemma in synset.lemmas:
    ...                     print lemma.name, lemma.synset.pos
    ...

If you want to make sure that you are only choosing lemmas with the same part of speech as the word you are currently looking at, then you will need to identify that word's part of speech too:

    >>> import nltk
    >>> from nltk.corpus import wordnet as wn
    >>> with open('sample.txt') as f:
    ...     raw = f.read()
    ...     for sentence in nltk.sent_tokenize(raw):
    ...         sentence = nltk.pos_tag(nltk.wordpunct_tokenize(sentence))
    ...         for word, pos in sentence:
    ...             print word, pos

I'll leave reconciling these two as an exercise for the reader.
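If it helps to see the two combined, here is one possible sketch. The penn_to_wn helper and its tag mapping are my own assumptions (NLTK itself doesn't provide them), and it keeps the same pre-3.0 attribute-style API used above:

    import nltk
    from nltk.corpus import wordnet as wn

    # Assumed helper: map Penn Treebank tags onto WordNet pos constants.
    def penn_to_wn(tag):
        if tag.startswith('J'):
            return wn.ADJ
        elif tag.startswith('V'):
            return wn.VERB
        elif tag.startswith('N'):
            return wn.NOUN
        elif tag.startswith('R'):
            return wn.ADV
        return None

    with open('sample.txt') as f:
        raw = f.read()
        for sentence in nltk.sent_tokenize(raw):
            tagged = nltk.pos_tag(nltk.wordpunct_tokenize(sentence))
            for word, tag in tagged:
                pos = penn_to_wn(tag)
                if pos is None:
                    continue  # skip tags with no WordNet counterpart
                # Restricting synsets() by pos keeps the lemmas to the word's tagged part of speech.
                for synset in wn.synsets(word, pos=pos):
                    for lemma in synset.lemmas:
                        print lemma.name, lemma.synset.pos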