NLP | Custom corpora



How to do this?
NLTK already defines a list of data paths, or directories, in nltk.data.path. Our custom corpora must be present in one of these paths for NLTK to find them.
We can also create our own nltk_data directory in our home directory and make sure it is in the list of known paths given in nltk.data.path.
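Since nltk.data.path is an ordinary Python list, it can be inspected and extended directly. A minimal sketch, assuming NLTK is installed (the /opt/my_corpora directory is made up for illustration):

```python
import os
import nltk.data

# ~/nltk_data is normally one of the directories NLTK searches by default
home_dir = os.path.expanduser('~/nltk_data')
print(home_dir in nltk.data.path)

# nltk.data.path is a plain list, so extra search directories can be appended
nltk.data.path.append('/opt/my_corpora')  # hypothetical directory
print('/opt/my_corpora' in nltk.data.path)
```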

Code #1: Create a custom directory and validate it.

# importing libraries
import os, os.path

# path of the custom nltk_data directory in the home directory
path = os.path.expanduser('~/nltk_data')

# create the directory if it does not already exist
if not os.path.exists(path):
    os.mkdir(path)

print("Does path exist:", os.path.exists(path))

import nltk.data
print("Does path exist in nltk:", path in nltk.data.path)

Output:

 Does path exist: True
 Does path exist in nltk: True

Code #2: Load a wordlist file.

# loading libraries
import nltk.data

# load the raw contents of the wordlist file
nltk.data.load('corpora/cookbook/word_file.txt', format='raw')

Output:

 b'nltk'
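Code #2 assumes that corpora/cookbook/word_file.txt already exists under one of the known data directories. A sketch of creating such a file first, under the custom ~/nltk_data directory from Code #1 (the file contents here are just an example):

```python
import os
import nltk.data

# build corpora/cookbook inside the custom ~/nltk_data directory
base = os.path.expanduser('~/nltk_data')
cookbook = os.path.join(base, 'corpora', 'cookbook')
os.makedirs(cookbook, exist_ok=True)

# write a tiny wordlist file
with open(os.path.join(cookbook, 'word_file.txt'), 'w') as f:
    f.write('nltk')

# load it back as raw bytes
print(nltk.data.load('corpora/cookbook/word_file.txt', format='raw'))
```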

How does it all work?

  • nltk.data.load() recognizes several formats, including "raw", "pickle" and "yaml".
  • If no format is specified, it is inferred from the file extension.
  • In the code above, the format "raw" is specified explicitly so that the raw bytes of the file are returned.
  • If the file ends with ".yaml", then you do not need to specify the format.
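To illustrate the extension-based inference, a small sketch (assuming NLTK is installed; the temporary directory and demo.txt file are made up for the example). The same file loaded with an explicit format='raw' comes back as bytes, while letting the format be inferred from the .txt extension yields decoded text:

```python
import os
import tempfile
import nltk.data

# create a throwaway data directory and register it with NLTK
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, 'corpora'))
with open(os.path.join(tmp, 'corpora', 'demo.txt'), 'w') as f:
    f.write('nltk')
nltk.data.path.append(tmp)

# explicit format: raw bytes
raw = nltk.data.load('corpora/demo.txt', format='raw', cache=False)
# format inferred from the .txt extension: a decoded string
auto = nltk.data.load('corpora/demo.txt', cache=False)
print(type(raw), type(auto))
```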

Code #3: Load a YAML file.

import nltk.data

# load the YAML file from the data path
nltk.data.load('corpora/cookbook/synonyms.yaml')

Output:

 {'bday': 'birthday'}