Beginning in the mid-1960s, research into AI slowed drastically during two periods of funding cuts.
The first began in 1966, in the aftermath of a report by ALPAC (the Automatic Language Processing Advisory Committee), which concluded that research into computational linguistics, and machine translation in particular, had failed to deliver, and that the applications of such systems were too limited to be useful.
This state of affairs would last roughly two decades. The 1980s saw a resurgence of optimism about the possibilities of AI thanks to advances in computing technology, but it was followed by another short period of decline as the new systems proved too expensive to maintain. It wasn't until the turn of the millennium that the technology would truly catch up and progress would begin to accelerate.
Rise of Statistical Models
Starting in the late 1980s, the field of NLP experienced a major shift toward machine learning. Before then, programs were built on algorithms based on complex, hard-coded rules. The rise of machine learning saw NLP adopt more computationally intensive statistical models, made practical by the growing availability of large volumes of textual data.
Once again, machine translation was one of the early adopters of statistical frameworks for natural language processing. Other areas of NLP research would also come to discard the older rule-based methods in favor of statistical models.
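To make the contrast with rule-based systems concrete, here is a minimal sketch of the statistical approach: a bigram language model that learns word-sequence probabilities from raw text instead of relying on hand-written grammar rules. The toy corpus and example sentences are purely illustrative and not drawn from any particular system.

```python
from collections import Counter

# Illustrative toy corpus; a real statistical model would be trained
# on millions of sentences.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased the dog",
]

# Count unigrams and bigrams across the corpus.
unigrams = Counter()
bigrams = Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

vocab_size = len(unigrams)

def bigram_prob(prev, word):
    """P(word | prev) with add-one smoothing so unseen pairs get a small, nonzero probability."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

def sentence_prob(sentence):
    """Probability of a whole sentence under the bigram model."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    prob = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        prob *= bigram_prob(prev, word)
    return prob

# The model prefers word orders that resemble the training data,
# without any explicitly coded linguistic rules.
print(sentence_prob("the cat sat on the mat"))   # relatively high
print(sentence_prob("mat the on sat cat the"))   # much lower
```

The key design difference from the earlier rule-based era is that nothing about grammar is written by hand here; the model's behavior comes entirely from counts over the training text, which is why access to large text collections mattered so much for this shift.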
How NLP is Used Today
Natural language processing has come a long way since the creation of the Turing test. And while machines have yet to pass it, many new use cases for natural language processing have emerged, especially in recent years.