APIs and Datasets for Natural Languages?

WordNet is a classic lexical resource for English, organized around semantic relationships such as synonymy and hypernymy.

For English I use OpenNLP. Unfortunately, I've never seen anything Korean-related, apart from the Google Language Detection and Translation APIs, which are quite easy to use.

MontyLingua might come in handy as an intermediate layer between English and Korean.

The Natural Language Toolkit (NLTK) is an excellent resource if you're considering Python as a language. It includes much of what you'd expect in a text-processing/NLP environment: parsers, stemmers, and part-of-speech taggers. The documentation is pretty good too. As for datasets, NLTK ships with a variety of annotated corpora and textual data sets to experiment with.
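To give a feel for NLTK, here is a minimal sketch of its stemming API, assuming NLTK is installed (`pip install nltk`). The `PorterStemmer` is a good starting point because it works out of the box, without downloading any extra corpora; tokenizers, taggers, and the bundled corpora require a one-time `nltk.download(...)` step.

```python
# Minimal NLTK stemming sketch. PorterStemmer needs no corpus
# downloads, so this runs with a bare NLTK install.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

# Reduce inflected forms to their stems.
words = ["parsing", "stemmers", "tagging", "running"]
stems = [stemmer.stem(w) for w in words]
print(stems)
```

The same `stem()` call pattern applies to NLTK's other stemmers (e.g. `SnowballStemmer`), so swapping algorithms later is a one-line change.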

Hope it helps, B.

