Think of your favorite NLP application that you wish to build: sentiment analysis, named entity recognition, machine translation, information extraction, text summarization, or a recommender system, to name a few. Recent advances in deep learning (DL) have acted as a great catalyst for pushing the boundaries of NLP.
However, feature engineering remains a critical component of any NLP task. Unlike images, where pixel intensities provide a natural representation, text has no such natural representation. No matter how good your ML/DL algorithm is, it can only do so much unless there is a richer way to represent the underlying text data. Thus, whatever NLP application you are building, it is imperative to find a good representation for your text data.
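To make this concrete, one of the simplest text representations is a bag-of-words encoding, which maps each document to a fixed-length vector of word counts. The sketch below, in plain Python, is purely illustrative; the naive whitespace tokenization and the function name are our own choices, and real pipelines would add proper tokenization, normalization, and weighting such as TF-IDF.

```python
from collections import Counter

def bag_of_words(docs):
    """Map each document to a count vector over a shared vocabulary.

    Illustrative sketch only: tokenization is naive whitespace splitting.
    """
    # Build the vocabulary from all documents, sorted for a stable ordering.
    vocab = sorted({word for doc in docs for word in doc.lower().split()})
    vectors = []
    for doc in docs:
        counts = Counter(doc.lower().split())
        # One count per vocabulary word; 0 if the word is absent.
        vectors.append([counts[word] for word in vocab])
    return vocab, vectors

vocab, vectors = bag_of_words(["the cat sat", "the cat and the dog"])
print(vocab)    # ['and', 'cat', 'dog', 'sat', 'the']
print(vectors)  # [[0, 1, 0, 1, 1], [1, 1, 1, 0, 2]]
```

Even this crude encoding turns raw strings into vectors that an ML algorithm can consume; the representation learning techniques covered in this bootcamp can be seen as progressively richer replacements for it.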
In this bootcamp, we will cover the key concepts, maths, and code behind state-of-the-art NLP techniques. Various representation learning techniques have been proposed in the literature, yet there is a dearth of comprehensive tutorials that cover both the mathematical explanations and the implementation details of these algorithms to a satisfactory depth.
This bootcamp aims to bridge that gap. It aims to demystify both the theory (key concepts, maths) and the practice (code) that go into building NLP models. By the end of this bootcamp, participants will have gained a fundamental understanding of these approaches, along with the ability to implement them on datasets of their interest.
- Data Science practitioners
- Corporates and Start-ups working with NLP
- Anyone (researcher, student, professional) working on NLP
This is a very hands-on course, so participants should be comfortable with programming. Familiarity with the Python data stack is ideal, and prior knowledge of machine learning will be helpful.
This bootcamp is part of the speakers' popular NLP bootcamp series. Relevant additional materials will be shared prior to the bootcamp.