Theoretical Basis of a Context-Based Language Model for Semantic Classification

Abstract

Language modeling (LM) has proved very useful throughout Statistical Natural Language Processing (SNLP) and Automatic Speech Recognition (ASR). With subtle modifications, the present article aims to demonstrate another use for it by introducing a novel language model that I call a context-based language model. In particular, I will design the machinery that ties this statistical technique to a Natural Language Understanding (NLU) problem, addressed by Formal Semantics, known as the proviso problem. First, I define what I mean by a language and a language model. Then I briefly discuss the proviso problem, and finally I offer a context-based language model to cover the specific natural-language behavior that gives rise to the proviso problem.
