Quite a few people have asked me through Zhihu or WeChat for links to these papers, so I'm posting them all here. Follow-up lists will cover DST, DPL, transfer learning for dialogue systems, reinforcement learning for dialogue systems, memory networks for dialogue systems, GANs for dialogue systems, and more; I'll post them once they're organized, so stay tuned if you're interested.

SLU

1.SLU-Domain/Intent Classification

1.1 SVM MaxEnt

1.2 Deep belief nets (DBN)

Deep belief nets for natural language call-routing, Sarikaya et al., 2011

1.3 Deep convex networks (DCN)

Towards deeper understanding: Deep convex networks for semantic utterance classification, Tur et al., 2012

1.4 Extension to kernel-DCN

Use of kernel deep convex networks and end-to-end learning for spoken language understanding, Deng et al., 2012

1.5 RNN and LSTMs

Recurrent Neural Network and LSTM Models for Lexical Utterance Classification, Ravuri et al., 2015

1.6 RNN and CNNs

Sequential Short-Text Classification with Recurrent and Convolutional Neural Networks, Lee et al., NAACL 2016

2. SLU – Slot Filling

2.1 RNN for Slot Tagging

Bi-LSTMs and Input sliding window of n-grams

2.1.1 Recurrent neural networks for language understanding, Interspeech 2013

2.1.2 Using recurrent neural networks for slot filling in spoken language understanding, Mesnil et al, 2015
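The sliding-window input above feeds each tagger step the current token plus its n-gram context. A minimal sketch of that windowing (the window size, padding token, and example sentence are illustrative, not taken from the papers):

```python
def context_windows(tokens, size=1, pad="<PAD>"):
    """For each token, return the window of `size` neighbors on each
    side, padding at sentence boundaries -- the context each tagger
    step sees in the sliding-window setup."""
    padded = [pad] * size + list(tokens) + [pad] * size
    return [padded[i:i + 2 * size + 1] for i in range(len(tokens))]

# Example: an ATIS-style utterance for the slot-tagging task
windows = context_windows(["flights", "to", "boston"], size=1)
for w in windows:
    print(w)
# ['<PAD>', 'flights', 'to']
# ['flights', 'to', 'boston']
# ['to', 'boston', '<PAD>']
```

In the papers these windows are embedded and fed to the recurrent tagger; here the function only shows how the context is assembled.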

2.1.3 Encoder-decoder networks

Leveraging Sentence-level Information with Encoder LSTM for Semantic Slot Filling, Kurata et al., EMNLP 2016

Attention-based encoder-decoder

Exploring the use of attention-based recurrent neural networks for spoken language understanding, Simonnet et al., 2015

2.2 Multi-task learning

2.2.1 Domain Adaptation of Recurrent Neural Networks for Natural Language Understanding, Jaech et al., Interspeech 2016

Joint Segmentation and Slot Tagging

2.2.2 Neural Models for Sequence Chunking, Zhai et al., AAAI 2017
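Sequence chunking groups tagged tokens into slot chunks. A minimal, non-neural decoder from standard BIO tags to (slot, phrase) chunks, assuming the usual BIO scheme (the slot names in the example are made up):

```python
def bio_to_chunks(tokens, tags):
    """Collect (slot_label, phrase) chunks from BIO tags, e.g.
    B-dest I-dest over two tokens -> one 'dest' chunk."""
    chunks, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                chunks.append((label, " ".join(current)))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == label:
            current.append(tok)
        else:  # "O" or an inconsistent I- tag closes any open chunk
            if current:
                chunks.append((label, " ".join(current)))
            current, label = [], None
    if current:
        chunks.append((label, " ".join(current)))
    return chunks

print(bio_to_chunks(["show", "flights", "to", "new", "york"],
                    ["O", "O", "O", "B-dest", "I-dest"]))
# [('dest', 'new york')]
```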

Joint Semantic Frame Parsing

2.2.3 Slot filling and intent prediction in the same output sequence

Multi-Domain Joint Semantic Frame Parsing using Bi-directional RNN-LSTM, Hakkani-Tur et al., Interspeech 2016

2.2.4 Intent prediction and slot filling are performed in two branches

Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling, Liu and Lane, Interspeech 2016

3. Contextual LU

3.1 Context Sensitive Spoken Language Understanding using Role Dependent LSTM layers, Hori et al., 2015

3.2 E2E MemNN for Contextual LU

End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Language Understanding, Chen et al., 2016
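The knowledge-carryover idea can be sketched as dot-product attention over encoded history turns: the current utterance vector attends to stored turn vectors and the weighted sum is carried over as context. The toy vectors below are illustrative; in the paper the encodings are learned end-to-end.

```python
import math

def attend(query, memory):
    """Soft attention: softmax over dot products between the
    current-turn vector and each stored history-turn vector,
    then a weighted sum (the carried-over context)."""
    scores = [sum(q * m for q, m in zip(query, mem)) for mem in memory]
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]  # stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    context = [sum(w * mem[d] for w, mem in zip(weights, memory))
               for d in range(len(query))]
    return weights, context

history = [[1.0, 0.0], [0.0, 1.0]]   # toy encodings of two past turns
weights, context = attend([1.0, 0.2], history)
```

The first turn is more similar to the query, so it receives the larger attention weight.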

3.3 Sequential Dialogue Encoder Network

Sequential Dialogue Context Modeling for Spoken Language Understanding, Bapna et al., SIGDIAL 2017

4. Structural LU

4.1 K-SAN: prior knowledge as a teacher; sentence structural knowledge stored as memory

Knowledge as a Teacher: Knowledge-Guided Structural Attention Networks, Chen et al., 2016

5. SL

CRF (Wang and Acero 2006; Raymond and Riccardi 2007):

Discriminative Models for Spoken Language Understanding, Wang and Acero, Interspeech 2006

Generative and discriminative algorithms for spoken language understanding, Raymond and Riccardi, Interspeech 2007

Convolutional neural network based triangular CRF for joint intent detection and slot filling, Puyang Xu and Ruhi Sarikaya
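CRF-based slot filling decodes the best tag sequence with the Viterbi algorithm over emission and transition scores. A minimal decoder sketch (the tag set and the toy scores below are illustrative, not learned weights):

```python
def viterbi(emissions, transitions, tags):
    """Decode the highest-scoring tag path.  emissions[t][tag] scores
    tag at position t; transitions[(prev, tag)] scores the tag bigram."""
    # best[tag] = (score of the best path ending in tag, that path)
    best = {tag: (emissions[0][tag], [tag]) for tag in tags}
    for em in emissions[1:]:
        best = {
            tag: max(
                ((score + transitions[(prev, tag)] + em[tag], path + [tag])
                 for prev, (score, path) in best.items()),
                key=lambda cand: cand[0],
            )
            for tag in tags
        }
    return max(best.values(), key=lambda cand: cand[0])[1]

# Toy scores: "boston" strongly emits B-dest, transitions are neutral.
tags = ["O", "B-dest"]
transitions = {(p, t): 0.0 for p in tags for t in tags}
emissions = [{"O": 1.0, "B-dest": 0.0},   # "to"
             {"O": 0.0, "B-dest": 1.0}]   # "boston"
print(viterbi(emissions, transitions, tags))  # ['O', 'B-dest']
```

A real CRF learns these scores jointly; the triangular CRF in Xu and Sarikaya additionally couples them with the intent label.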

RNN (Yao et al. 2013; Mesnil et al. 2013, 2015; Liu and Lane 2015);

Recurrent neural networks for language understanding, Interspeech 2013

Using recurrent neural networks for slot filling in spoken language understanding, Mesnil et al., 2015

Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding, Mesnil et al., Interspeech 2013

Recurrent Neural Network Structured Output Prediction for Spoken Language Understanding, Liu and Lane, NIPS 2015

LSTM (Yao et al. 2014)

Spoken language understanding using long short-term memory neural networks

6. SL + TL

Instance based transfer for SLU (Tur 2006);

Gokhan Tur. Multitask learning for spoken language understanding. In 2006 IEEE

Model adaptation for SLU (Tür 2005);

Gökhan Tür. Model adaptation for spoken language understanding. In ICASSP (1), pages 41–44. Citeseer, 2005.

Parameter transfer (Yazdani and Henderson)

A Model of Zero-Shot Learning of Spoken Language Understanding

_______________________________________________________________________________________

NLG

Tradition

Template-Based NLG

Plan-Based NLG (Walker et al., 2002)

Class-Based LM NLG

Stochastic language generation for spoken dialogue systems, Oh and Rudnicky, NAACL 2000

Phrase-Based NLG

Phrase-based statistical language generation using graphical models and active learning, Mairesse et al, 2010
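The template-based approach listed above is the simplest of these traditional methods: map each dialogue act directly to a hand-written surface form with slot substitution. A minimal sketch (the act names, slot names, and templates are invented for illustration):

```python
# Hypothetical dialogue acts and templates, for illustration only.
TEMPLATES = {
    "inform_price": "The price of {restaurant} is {price}.",
    "confirm_area": "Did you say you want a restaurant in the {area}?",
}

def template_nlg(act, slots):
    """Realize a dialogue act by filling its template's slots."""
    return TEMPLATES[act].format(**slots)

print(template_nlg("inform_price",
                   {"restaurant": "Golden Wok", "price": "cheap"}))
# The price of Golden Wok is cheap.
```

Templates are fluent but rigid; the statistical and neural methods below trade hand-writing effort for learned variation.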

RNN-Based LM NLG

Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking, Wen et al., SIGDIAL 2015

Semantic Conditioned LSTM

Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems, Wen et al., EMNLP 2015

Structural NLG

Sequence-to-Sequence Generation for Spoken Dialogue via Deep Syntax Trees and Strings, Dušek and Jurčíček, ACL 2016

Contextual NLG

A Context-aware Natural Language Generator for Dialogue Systems, Dušek and Jurčíček, 2016

Controlled Text Generation

Toward Controlled Generation of Text , Hu et al., 2017

1. NLG-Traditional

Marilyn A Walker, Owen C Rambow, and Monica Rogati. Training a sentence planner for spoken dialogue using boosting.

2. NLG-Corpus based

Alice H Oh and Alexander I Rudnicky. Stochastic language generation for spoken dialogue systems. Oh and Rudnicky, 2000

François Mairesse and Steve Young. Stochastic language generation in dialogue using factored language models. Mairesse and Young, 2014

3. NLG-Neural Network

Recurrent neural network based language model.

Extensions of recurrent neural network language model

Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking

Semantically conditioned lstm-based natural language generation for spoken dialogue systems.

4. Transfer learning for NLG

Recurrent neural network based language model personalization by social network crowdsourcing. In INTERSPEECH, 2013. Wen et al., 2013

Yangyang Shi, Martha Larson, and Catholijn M Jonker. Recurrent neural network language model adaptation with curriculum learning. Shi et al., 2015

Multi-domain neural network language generation for spoken dialogue systems. Wen et al., NAACL 2016

If you still have questions, feel free to reach me through the official account AI部落聯盟. I'd be glad to learn and discuss with like-minded people.

