Reading Group Blog -- Semantically Equivalent Adversarial Rules for Debugging NLP Models (ACL 2018)
by Allen Nie
In this second post, we focus on the following paper: Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "Semantically Equivalent Adversarial Rules for Debugging NLP Models." Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2018. Robustness is a central concern in engineering. Our suspension bridges need to stand against strong wind so they won't collapse like the Tacoma Narrows Bridge did. Our nuclear reactors need to be fault tolerant so that incidents like Fukushima Daiichi won't happen again. When we…
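The paper's core loop is simple enough to sketch: apply a meaning-preserving rewrite rule to an input and flag it as an adversary if the model's prediction changes. Below is a minimal, hypothetical illustration; the rule, the toy model, and the helper names are our own, not the paper's code (the actual work induces such rules automatically from paraphrase models).

```python
# A minimal, hypothetical sketch of semantically equivalent adversarial
# rules: rewrite an input with a meaning-preserving rule and flag it if
# the model's prediction flips. Everything here is illustrative.
import re

def apply_rule(text, pattern, replacement):
    """Apply a textual rewrite rule such as 'What is' -> "What's"."""
    return re.sub(pattern, replacement, text)

def find_adversaries(model_predict, rule, examples):
    """Return (original, rewrite) pairs whose prediction flips under the rule."""
    adversaries = []
    for text in examples:
        rewritten = apply_rule(text, *rule)
        if rewritten != text and model_predict(rewritten) != model_predict(text):
            adversaries.append((text, rewritten))
    return adversaries

# Toy model with a brittle surface-form dependency, for demonstration only.
toy_model = lambda q: "answerable" if q.startswith("What is") else "unanswerable"
rule = (r"^What is\b", "What's")  # semantically equivalent rewrite
print(find_adversaries(toy_model, rule, ["What is the capital of France?"]))
# -> [('What is the capital of France?', "What's the capital of France?")]
```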
Reading Group Blog -- LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better (ACL 2018)
by Robin Jia
Welcome to the Stanford NLP Reading Group Blog! Inspired by other groups, notably the UC Irvine NLP Group, we have decided to blog about the papers we read at our reading group. In this first post, we'll discuss the following paper: Kuncoro et al. "LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better." ACL 2018. This paper builds on earlier work: Linzen et al. "Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies." TACL 2016. Both papers address the question, "Do neural language models actually learn to model s…
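The shared experimental backbone of both papers is a number-agreement probe: after a tricky prefix, the model should assign higher probability to the correctly inflected verb than to its competitor. A minimal sketch of that evaluation follows; the `lm_logprob` interface and its dummy scorer are our own assumptions, not either paper's code.

```python
# A minimal sketch of the number-agreement probe from Linzen et al. (2016):
# a model that has learned syntax should prefer the grammatical verb form.
# `lm_logprob` is a dummy stand-in we invented; plug in a real trained LM.

def lm_logprob(prefix_tokens, next_token):
    """Dummy scorer that 'notices' the plural subject. Replace with a real
    language model's log P(next_token | prefix)."""
    plural_subject = "keys" in prefix_tokens
    return 0.0 if plural_subject == (next_token == "are") else -1.0

def agreement_correct(prefix, correct_verb, wrong_verb):
    """True if the LM prefers the grammatical verb form for this prefix."""
    tokens = prefix.split()
    return lm_logprob(tokens, correct_verb) > lm_logprob(tokens, wrong_verb)

# Plural subject "keys" with a singular attractor "cabinet" in between:
print(agreement_correct("the keys to the cabinet", "are", "is"))  # True
```

Accuracy over many such (prefix, correct verb, distractor verb) triples is the metric these papers report.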
A New Multi-Turn, Multi-Domain, Task-Oriented Dialogue Dataset
by Mihail Eric
Task-oriented dialogue focuses on conversational agents that participate in user-initiated dialogues on domain-specific topics. The task-oriented dialogue community has often been hindered by a lack of sufficiently large and diverse datasets for training models across a variety of domains. To help alleviate this problem, we release a corpus of 3,031 multi-turn dialogues in three distinct domains appropriate for an in-car assistant: calendar scheduling, weather information retrieval, and point-of-interest navigation. Our dialogues are grounded through knowledge…
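To make "grounded" concrete, here is a purely hypothetical sketch of what one such multi-turn dialogue might look like as data: user and assistant turns paired with knowledge-base rows. The field names and values are our own illustration, not the released corpus's actual schema.

```python
# A hypothetical sketch of one grounded, multi-turn dialogue: user/assistant
# turns paired with the knowledge-base rows the conversation draws on.
# All field names and values are illustrative, not the corpus's real schema.
dialogue = {
    "domain": "weather",
    "kb": [  # knowledge-base rows the assistant may consult
        {"location": "san francisco", "day": "friday", "forecast": "foggy, 60F"},
        {"location": "san francisco", "day": "saturday", "forecast": "sunny, 68F"},
    ],
    "turns": [
        {"speaker": "driver", "utterance": "Will it be sunny in San Francisco this weekend?"},
        {"speaker": "assistant", "utterance": "Saturday looks sunny at 68F; Friday will be foggy."},
    ],
}

# Toy grounding check: which KB rows does the assistant's reply draw on?
reply = dialogue["turns"][-1]["utterance"].lower()
print([row for row in dialogue["kb"] if row["day"] in reply])
```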
CS224n Competition on The Stanford Question Answering Dataset with CodaLab
by Pranav Rajpurkar, Stephen Koo, and Percy Liang
The Stanford Question Answering Dataset (SQuAD) is a reading comprehension benchmark with an active and highly competitive leaderboard. More than 17 industry and academic teams have submitted their models (with executable code) since SQuAD's release in June 2016, leading to novel deep learning architectures that have outperformed baseline models by wide margins. As teams compete to build the best machine comprehension system, the challenge of rivaling human-level performance remains open. SQuAD is a unique large-scale benchmark in that it uses a hidden test set for official…
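Leaderboard submissions are ranked by exact match (EM) and token-level F1 against the gold answers. The sketch below is our simplified rendering of that scoring, in the spirit of (but not identical to) the official evaluation script.

```python
# Simplified SQuAD-style scoring: exact match (EM) and token-overlap F1
# after light normalization (lowercase, strip punctuation and articles).
# This follows the spirit of the official evaluation script, not its code.
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and the articles a/an/the, squeeze spaces."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred, gold):
    return normalize(pred) == normalize(gold)

def f1(pred, gold):
    p, g = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))      # True
print(round(f1("in the Eiffel Tower", "eiffel tower"), 2))  # 0.8
```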
Interactive Language Learning
by Nadav Lidor and Sida I. Wang
Today, natural language interfaces (NLIs) on computers and phones are typically trained once and then deployed, and users must simply live with their limitations. Allowing users to demonstrate or teach the computer appears to be a central ingredient of more natural and usable NLIs. Language acquisition research offers considerable evidence that human children require interaction to learn language, rather than passively absorbing it, for example by watching TV (Kuhl et al., 2003; Sachs et al., 1981). Research suggests that when learning a language, rather than conscious…
In Their Own Words: The 2016 Graduates of the Stanford NLP Group
by Stanford NLP
This year we have a true bumper crop of graduates from the NLP Group - ten people! We're sad to see them go but excited for all the wonderful things they're off to do. Thanks to them all for being a part of the group and for their amazing contributions! We asked all the graduates to give us a few words about what they did here and where they're headed - check it out!

PhD Students

Gabor Angeli: My Ph.D. focused on natural language understanding. Early in the program, I worked on semantic parsing for temporal expressions, before moving on to relation extraction -- I was actively involved in Stanford…
Hybrid tree-sequence neural networks with SPINN
by Jon Gauthier
This is a cross-post from my personal blog. We've finally published a neural network model that has been under development at Stanford for over a year. I'm proud to announce SPINN: the Stack-augmented Parser-Interpreter Neural Network. The project fits into a long-standing Stanford research program: mixing deep learning methods with principled approaches inspired by linguistics. It is the result of a substantial collaborative effort also involving Sam Bowman, Abhinav Rastogi, Raghav Gupta, and our advisors Christopher Manning and Christopher Potts. This post is a brief introduction to…
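At SPINN's core is a shift-reduce computation over a binary parse: SHIFT pushes the next word's representation onto a stack, and REDUCE pops the top two phrases and composes them. Here is a minimal sketch of that control flow; the placeholder `compose` stands in for the learned TreeLSTM-style composition, and the model's "thin stack" efficiency trick is omitted.

```python
# A minimal sketch of SPINN's shift-reduce control flow. 'S' shifts the next
# token onto the stack; 'R' pops two phrases and composes them. The compose
# function here is a placeholder for the learned TreeLSTM-style layer.

def compose(left, right):
    """Placeholder composition; SPINN learns this as a neural layer."""
    return "(" + left + " " + right + ")"

def run_spinn(tokens, transitions):
    stack, buffer = [], list(tokens)
    for op in transitions:
        if op == "S":
            stack.append(buffer.pop(0))  # SHIFT: consume the next word
        else:
            right, left = stack.pop(), stack.pop()  # REDUCE: merge top two
            stack.append(compose(left, right))
    return stack[-1]  # representation of the whole sentence

# The parse ((the cat) (sat down)) as a transition sequence:
print(run_spinn(["the", "cat", "sat", "down"], "SSRSSRR"))
# -> ((the cat) (sat down))
```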
How to help someone feel better: NLP for mental health
by Kevin Clark and Tim Althoff
Natural language processing (NLP) allows us to tag, parse, and even extract information from text. But we believe it also has the potential to help address major challenges facing the world. Recently, we have been working on applying NLP to a serious global health issue: mental illness. In the U.S. alone, 43.6 million adults (18.1%) experience mental illness each year. Fortunately, mental health conditions can often be treated with counseling and psychotherapy, and in recent years there has been rapid growth in the availability of these treatments thanks to technology-mediated counseling. The…
Maximum Likelihood Decoding with RNNs - the good, the bad, and the ugly
by Russell Stewart
Training TensorFlow's large language model on the Penn Treebank yields a test perplexity of 82. With the code provided here, we used the large model for text generation and got the following results, depending on the temperature parameter τ used for sampling. τ = 1.0: "The big three auto makers posted a N N drop in early fiscal first-half profit. The same question is how many increasing cash administrative and financial institutions might disappear in choosing. The man in the compelling future was considered the city Edward H. Werner Noriega's chief financial officer were unavailable for comme…
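The temperature τ is a single knob on the sampler: divide the model's logits by τ before the softmax, so τ < 1 sharpens the distribution toward greedy decoding while τ > 1 flattens it toward uniform. A self-contained sketch, with toy logits standing in for the trained RNN's output:

```python
# Temperature sampling: sample from softmax(logits / tau). tau = 1.0 uses the
# model's distribution unchanged; lower tau sharpens it, higher tau flattens
# it. The vocabulary and logits are toy stand-ins for a trained RNN LM.
import math
import random

def sample_with_temperature(logits, tau=1.0):
    scaled = [l / tau for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r, acc = random.random(), 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return i
    return len(exps) - 1

vocab = ["the", "profit", "N", "<eos>"]
logits = [2.0, 1.0, 0.5, 0.1]  # hypothetical next-token scores
print(vocab[sample_with_temperature(logits, tau=0.7)])
```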
WikiTableQuestions: a Complex Real-World Question Understanding Dataset
by Ice Pasupat
Natural language question understanding has been one of the most important challenges in artificial intelligence. Indeed, eminent AI benchmarks such as the Turing test require an AI system to understand natural language questions on a variety of topics and at varying levels of complexity, and to respond appropriately. Over the past few years, we have witnessed rapid progress in question answering technology, with virtual assistants like Siri, Google Now, and Cortana answering everyday questions, and IBM Watson beating human champions at Jeopardy!. However, even the best question answering systems today still face…
