NLP | Phrases


Phrases, or collocations, are two or more words that often appear together, for example, "United States". Many other words can follow "United", such as in "United Kingdom" and "United Airlines". As with many aspects of natural language processing, context is very important, and for collocations, context is everything.
In the case of phrases, the context is the document as a list of words. Finding phrases in this word list means finding word combinations that occur frequently throughout the text.
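To make the idea concrete, here is a minimal, library-free sketch of what a bigram is: every pair of adjacent words in a token list. The sample sentence is a made-up illustration, not from the data used below.

```python
from collections import Counter

# A toy token list (hypothetical example text).
words = "the united states and the united kingdom and the united states".split()

# Each bigram is a pair of adjacent words; zip() pairs every word
# with the word that follows it.
bigrams = list(zip(words, words[1:]))

# Counting bigrams shows which pairs co-occur most often.
counts = Counter(bigrams)
print(counts.most_common(2))
```

Here ('the', 'united') and ('united', 'states') dominate, which is exactly the kind of signal a collocation finder exploits at document scale.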

Link to data — the Monty Python and the Holy Grail script

Code # 1: Loading Libraries

from nltk.corpus import webtext

# used to find bigrams, which are pairs of words
from nltk.collocations import BigramCollocationFinder
from nltk.metrics import BigramAssocMeasures

Code # 2: Let's find phrases

# Loading data
words = [w.lower() for w in webtext.words(
    'C:\\Geeksforgeeks\\python_and_grail.txt')]

bigram_collocation = BigramCollocationFinder.from_words(words)
bigram_collocation.nbest(BigramAssocMeasures.likelihood_ratio, 15)

Output:

 [("'", 's'), ('arthur', ':'), ('#', '1'), ("'", 't'), ('villager', '#'), ('#', '2'), (']', '['), ('1', ':'), ('oh', ','), ('black', 'knight'), ('ha', 'ha'), (':', 'oh'), ("'", 're'), ('galahad', ':'), ('well', 'oh')]

As the output above shows, finding collocations this way is not very useful. The code below is an improved version that adds a word filter to remove punctuation and stop words.

Code # 3:

from nltk.corpus import stopwords

stopset = set(stopwords.words('english'))
filter_stops = lambda w: len(w) < 3 or w in stopset

bigram_collocation.apply_word_filter(filter_stops)
bigram_collocation.nbest(BigramAssocMeasures.likelihood_ratio, 15)

Output:

 [('black', 'knight'), ('clop', 'clop'), ('head', 'knight'), ('mumble', 'mumble'), ('squeak', 'squeak'), ('saw', 'saw'), ('holy', 'grail'), ('run', 'away'), ('french', 'guard'), ('cartoon', 'character'), ('iesu', 'domine'), ('pie', 'iesu'), ('round', 'table'), ('sir', 'robin'), ('clap', 'clap')]

How does this work in code?

  • BigramCollocationFinder builds two frequency distributions:
    • one for individual words
    • one for bigrams.
  • A frequency distribution is essentially an extended Python dictionary where the keys are the items being counted and the values are their counts.
  • Any filtering function reduces the size of the distributions by removing words that do not pass the filter.
  • Filtering out all words of one or two characters and all English stop words produces a much cleaner result.
  • After filtering, the collocation finder is ready to search for collocations.
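The steps above can be sketched in plain Python. This is a simplified, hypothetical stand-in for BigramCollocationFinder that ranks by raw frequency rather than the likelihood ratio NLTK uses, with a tiny made-up stop list:

```python
from collections import Counter

# Toy data; in the article the words come from the movie script.
words = ['the', 'black', 'knight', 'and', 'the', 'black', 'knight',
         'ha', 'ha', 'oh', 'the', 'black', 'knight']
stopset = {'the', 'and', 'oh'}           # tiny stand-in for the NLTK stop list
filter_stops = lambda w: len(w) < 3 or w in stopset

# Two frequency distributions: one for words, one for bigrams.
word_fd = Counter(words)
bigram_fd = Counter(zip(words, words[1:]))

# Filtering removes any bigram containing a word that fails the filter.
filtered = Counter({bg: n for bg, n in bigram_fd.items()
                    if not (filter_stops(bg[0]) or filter_stops(bg[1]))})

# "nbest" by raw frequency (NLTK would score with a likelihood ratio instead).
print(filtered.most_common(2))
```

Only ('black', 'knight') survives the filter here, mirroring how the stop-word filter above turns noisy punctuation pairs into meaningful collocations.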

Code # 4: Working on triplets instead of pairs.

# Loading libraries
from nltk.collocations import TrigramCollocationFinder
from nltk.metrics import TrigramAssocMeasures

# Load data - text file
words = [w.lower() for w in webtext.words(
    'C:\\Geeksforgeeks\\python_and_grail.txt')]

trigram_collocation = TrigramCollocationFinder.from_words(words)
trigram_collocation.apply_word_filter(filter_stops)
trigram_collocation.apply_freq_filter(3)

trigram_collocation.nbest(TrigramAssocMeasures.likelihood_ratio, 15)

Output:

 [('clop', 'clop', 'clop'), ('mumble', 'mumble', 'mumble'), ('squeak', 'squeak', 'squeak'), ('saw', 'saw', 'saw'), ('pie', 'iesu', 'domine'), ('clap', 'clap', 'clap'), ('dona', 'eis', 'requiem'), ('brave', 'sir', 'robin'), ('heh', 'heh', 'heh'), ('king', 'arthur', 'music'), ('hee', 'hee', 'hee'), ('holy', 'hand', 'grenade'), ('boom', 'boom', 'boom'), ('...', 'dona', 'eis'), ('already', 'got', 'one')]
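The same mechanics extend to triples. Here is a minimal plain-Python sketch of trigram counting with a minimum-frequency cutoff, mirroring what apply_freq_filter(3) does; the token list is made up for illustration:

```python
from collections import Counter

# Toy tokens in which one trigram repeats often enough to keep.
words = ['clop', 'clop', 'clop', 'stop', 'clop', 'clop', 'clop',
         'stop', 'clop', 'clop', 'clop']

# Trigrams are triples of adjacent words.
trigram_fd = Counter(zip(words, words[1:], words[2:]))

# Keep only trigrams seen at least 3 times, like apply_freq_filter(3).
frequent = {tg: n for tg, n in trigram_fd.items() if n >= 3}
print(frequent)
```

The cutoff discards one-off triples such as ('clop', 'stop', 'clop'), which is why the trigram output above is dominated by genuinely repeated phrases from the script.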
