Special Session
Transferable neural models for language understanding
Overview
Language understanding, spanning machine reading comprehension in forms such as question answering, machine translation, and spoken dialog, has long been an aspiration of the artificial intelligence community, but had only limited success until recently. The success of deep neural networks has brought a resurgence of interest in applying them to language understanding. The most recent research aims to build deep neural network models that can serve a variety of language understanding tasks, such as paraphrasing, question answering, machine translation, spoken dialog, and text categorization. However, these models are (1) data hungry, requiring large amounts of training data, and (2) task specific, making it hard to generalize a model trained on one task to other related tasks. To address these problems, transfer learning has recently been applied to language understanding. Transfer learning is a learning paradigm that aims to apply knowledge gained while solving one problem to a different but related problem: a neural model is first built for one language understanding task with large training data, and the model is then retrained for another task with only small training data.
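The pretrain-then-retrain recipe described above can be illustrated with a short sketch. The following PyTorch code is a minimal illustration and not part of the call itself; the Encoder class, the layer sizes, and the random stand-in batches are all assumptions made for the example. It trains a shared encoder with a task-specific head on a large source task, then swaps in a fresh head and retrains the same encoder on a small target task.

import torch
import torch.nn as nn

torch.manual_seed(0)

class Encoder(nn.Module):
    """Shared sentence encoder whose weights are reused across tasks."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        _, (hidden, _) = self.rnn(self.embed(token_ids))
        return hidden[-1]  # (batch, hidden_dim) sentence representation

def train(encoder, head, token_ids, labels, lr, steps):
    """Jointly optimize the shared encoder and a task-specific head."""
    params = list(encoder.parameters()) + list(head.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(head(encoder(token_ids)), labels)
        loss.backward()
        optimizer.step()

encoder = Encoder()

# Stage 1: source task with plenty of data (random stand-in batches here).
src_x = torch.randint(0, 10000, (64, 20))  # 64 "sentences" of 20 tokens
src_y = torch.randint(0, 5, (64,))         # 5-way source-task labels
train(encoder, nn.Linear(256, 5), src_x, src_y, lr=1e-3, steps=100)

# Stage 2: swap in a fresh head and retrain on the small target dataset.
# A lower learning rate keeps the transferred encoder weights mostly intact.
tgt_x = torch.randint(0, 10000, (8, 20))   # only 8 labelled target examples
tgt_y = torch.randint(0, 3, (8,))          # 3-way target-task labels
train(encoder, nn.Linear(256, 3), tgt_x, tgt_y, lr=1e-4, steps=50)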
Important Dates
Paper submission: December 15, 2018
Paper acceptance notification: January 30, 2019
Topics of Interest
Topics of interest for this special session include but are not limited to:
• Deep learning
• Multi-task learning
• Transfer learning
• Active learning
• Self-taught learning
• Domain adaptation
• Question answering
• Paraphrasing
• Natural language inference
• Sequence-to-sequence learning
• Natural language generation
• Machine translation
• Summarization
• Information extraction
Submission Notes
When submitting your paper via https://ieee-cis.org/conferences/ijcnn2019/upload.php, please select “S33: Transferable neural models for language understanding”.
Contact