Think of your favorite NLP application you wish to build - sentiment analysis, named entity recognition, machine translation, information extraction, summarization, a recommender system, to name a few. A key step towards achieving any of these tasks is using the right set of techniques to represent text in a form that a machine can easily understand.
Unlike images, where pixel intensities provide a natural representation, text has no such natural representation. No matter how good your ML algorithm is, it can only do so much unless there is a richer way to represent the underlying text data. Thus, whatever NLP application you are building, it's imperative to find a good representation for your text data. Motivated by this, the subfield of representation learning of text for NLP has attracted a lot of research interest in the past few years.
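To make "representing text" concrete, here is a minimal sketch of one of the simplest schemes, a bag-of-words count vector; the corpus and vocabulary are illustrative examples, not taken from the bootcamp material.

```python
# Minimal bag-of-words sketch: each document becomes a vector of word counts.
# The two-sentence corpus below is purely illustrative.
corpus = ["the movie was great", "the movie was terrible"]

# Build a vocabulary mapping each unique word to a column index.
vocab = sorted({word for doc in corpus for word in doc.split()})
index = {word: i for i, word in enumerate(vocab)}

# Represent each document as a fixed-length vector of word counts.
vectors = []
for doc in corpus:
    counts = [0] * len(vocab)
    for word in doc.split():
        counts[index[word]] += 1
    vectors.append(counts)

print(vocab)    # column order of the vectors
print(vectors)  # one count vector per document
```

Such sparse count vectors are only a starting point; the bootcamp covers richer, learned representations that go well beyond this.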
In this bootcamp, we will cover the key concepts, maths, and code behind the state-of-the-art techniques for text representation. Various representation learning techniques have been proposed in the literature, but there is still a dearth of comprehensive tutorials that provide full coverage of these algorithms, with mathematical explanations as well as implementation details, to a satisfactory depth.
This bootcamp aims to bridge this gap. It aims to demystify both the theory (key concepts, maths) and the practice (code) that go into building these representation schemes. By the end of this bootcamp, participants will have gained a fundamental understanding of these schemes, along with the ability to implement them on datasets of their interest.
- Machine learning practitioners
- Anyone (researcher, student, professional) learning NLP
- Companies and start-ups looking to add NLP to their product or service offerings
- This is a hands-on course, so participants should be comfortable with programming. Familiarity with the Python data stack is ideal.
- Prior knowledge of machine learning will be helpful. Participants should have some practice with basic NLP problems, e.g. sentiment analysis.
- While the deep learning concepts will be taught in an intuitive way, some prior knowledge of linear algebra and probability theory would be helpful.
The material for the bootcamp is hosted on GitHub. You can find the slides for this workshop here.
This is part of the speakers' popular bootcamp series on NLP. Additional relevant materials will be shared prior to the bootcamp.