Author | Chen Daxin
Last night, EMNLP 2020 officially opened online!
EMNLP is a top international conference in the field of natural language processing, organized by SIGDAT, a special interest group of the Association for Computational Linguistics (ACL). It is held once a year; last year it was held in Hong Kong jointly with IJCNLP, and this year it is being held online due to the pandemic.
Many readers may have missed last night's opening ceremony. Don't worry: AI Technology Review brings you a full rundown of the EMNLP 2020 opening ceremony!
1
Conference submission data summary
According to EMNLP 2020 program co-chair Professor Yulan He of the University of Warwick in the UK, the conference received a total of 3,677 submissions, of which 3,359 were valid.
Caption: Professor Yulan He
Paper submission data:
The figure above shows EMNLP submission numbers since 2017. Submissions have clearly grown rapidly year after year; this year's total is up 16% over last year's. If this growth rate continues, EMNLP submissions next year will likely exceed 4,000.
Paper acceptance rates:
The figure above shows EMNLP's overall acceptance rate and the acceptance rates for long and short papers since 2017. At a glance, the numbers look similar across years.
However, whether one looks at the overall acceptance rate (blue bars), the long-paper acceptance rate (orange bars), or the short-paper acceptance rate (white bars), this year's rates are the lowest of the past four years.
EMNLP 2020 accepted a total of 752 papers to the main conference: 602 long papers and 150 short papers.
The acceptance rate for long papers is 24.6%, roughly the same as in previous years, while the acceptance rate for short papers is noticeably lower than in past years.
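For reference, a quick back-of-the-envelope check (a sketch assuming the 3,359 valid submissions as the denominator, which the presentation did not state explicitly) gives the implied overall main-conference acceptance rate:

$$
\frac{752}{3359} \approx 22.4\%
$$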
Ranking of paper submission/acceptance rates by country:
Submissions came from 57 different countries; the figure above shows only the countries with more than ten submissions. The top seven countries by number of submissions are: the United States, China, the United Kingdom, Germany, India, Canada, and Japan.
In addition, as was the case last year, both China and the United States contributed over 1,000 papers, firmly holding the top two spots worldwide.
However, China and the United States do not lead in acceptance rate. The United Kingdom, Singapore, and Denmark are at the forefront with acceptance rates of around 30%, while the United States' acceptance rate is 26.6%.
China's acceptance rate, by contrast, is only 13.2%, far below the conference average.
2
More data
This year, EMNLP 2020 introduces an innovation: "Findings of ACL: EMNLP 2020".
This is a new category of accepted papers. According to EMNLP, it allows more high-quality papers (both long and short) to be accepted: it publishes work that was not accepted to the main conference but that the program committee judged to be solid, with sufficient substance, quality, and novelty. These papers will be included in the ACL Anthology.
AI Technology Review has previously published a detailed introduction to Findings; interested readers can refer to the earlier article "EMNLP 2020 acceptance results are out. I heard you got into Findings?".
Comparison of main conference and Findings data:
The above figure shows the average review scores of the main conference and Findings accepted papers.
It can be seen that most of the main conference papers have an average review score of more than 3.67, and papers with an average score of more than 3.5 have a high probability of being accepted by the main conference.
Papers with an average score of 3.17-3.5 are more likely to be accepted by Findings.
Conference paper subject classification data:
This year's submissions were divided into 20 topics by the conference, 8 of which received more than 200 submissions.
Machine Learning for NLP and NLP Applications received the most submissions, each with more than 300, while machine translation, information extraction, dialogue systems, language generation, and sentence-level semantics also each exceeded 200 submissions.
In addition, the number of submissions on interpretability and model analysis has increased significantly this year. This topic was newly introduced at ACL 2020, where it received 95 submissions; that number has doubled at EMNLP 2020, showing that the community's interest in NLP interpretability and model analysis is growing rapidly.
At the main conference, every topic had an acceptance rate above 20%, and interpretability and model analysis had an acceptance rate of 27%. Smaller topics such as phonology, morphology and word segmentation, syntax, lexical semantics, and linguistic theories also had acceptance rates above 27%.
3
Review process
Caption: Trevor Cohn
Following program co-chair Yulan He, the conference's other program co-chair, Professor Trevor Cohn of the University of Melbourne, introduced this year's review process.
This year's review involved more than 3,000 reviewers, organized in a hierarchical structure:
The conference required every paper to nominate at least one author as a reviewer, and reviewers were assigned to different research areas.
The organizers also crawled each reviewer's publication record and used this academic data to identify more senior reviewers.
Reviewers’ published papers:
The figure above shows the number of past publications of each reviewer. Compared with ICLR 2019, where nearly half of the reviewers had not published papers in the relevant field, the reviewing situation at EMNLP 2020 looks much better.
Findings:
Papers accepted to "Findings" will be presented at workshops. To test whether the "Findings" innovation was successful, the organizing committee asked the authors of more than 100 Findings submissions whether they wished to withdraw their manuscripts; in the end, 86% of the authors chose not to withdraw.
4
Prolific Chinese scholars
According to incomplete statistics from AI Technology Review, the team of Dr. Lidong Bing from the Natural Language Intelligence Laboratory of Alibaba's DAMO Academy and Xiong Caiming's team at Salesforce AI have 9 papers accepted to this year's EMNLP main conference, the most main-conference papers of any group worldwide.
Meanwhile, Liu Zhiyuan's team at Tsinghua University has 8 main conference papers, Liu Ting's team at Harbin Institute of Technology has 7, and the teams of Professors Han Jiawei, Zhou Ming, and Huang Xuanjing each have 6.
Below, AI Technology Review gives a brief introduction to these scholars' accepted papers at EMNLP 2020.
Bing Lidong of DAMO Academy
Dr. Bing Lidong currently works in the Natural Language Intelligence Laboratory of Alibaba's DAMO Academy. He obtained his Ph.D. from the Chinese University of Hong Kong and was a postdoctoral researcher in machine learning at Carnegie Mellon University. His research interests include low-resource natural language processing, sentiment analysis, text generation and summarization, information extraction, and knowledge bases.
Personal homepage: https://lidongbing.github.io/
The following are all of Dr. Lidong Bing's team's accepted main conference papers:
1, "ENT-DESC: Entity Description Generation by Exploring Knowledge Graph" Liying Cheng, Dekun Wu, Lidong Bing, Yan Zhang, Zhanming Jie, Wei Lu and Luo Si.
2, "APE:Argument Pair Extraction from Peer Review and Rebuttal via Multi-task Learning" Liying Cheng, Lidong Bing, Qian Yu, Wei Lu and Luo Si.
3, "DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks" BOSHENG DING, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si and Chunyan Miao.
4, "Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation" Yan Zhang, Zhijiang Guo, Zhiyang Teng, Wei Lu, Shay B. Cohen, ZUOZHU LIU and Lidong Bing.
5, "Feature Adaptation of Pre-Trained Language Models across Languages and Domains with Robust Self-Training" Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng and Lidong Bing .
6, "Partially-Aligned Data-to-Text Generation with Distant Supervision" Zihao Fu, Bei Shi, Wai Lam, Lidong Bing and Zhiyuan Liu.
7, "Position-Aware Tagging for Aspect Sentiment Triplet Extraction" Lu Xu, Hao Li , Wei Lu and Lidong Bing.
8, "An Un supervised Sentence Embedding Method by Mutual Information Maximization" Yan Zhang, Ruidan He, ZUOZHU LIU, Kwan Hui Lim and Lidong Bing.
9, "Aspect Sentiment Classification with Aspect-Specific Opinion Spans Lu Xu, Lidong Bing, Wei Lu and Fei Huang" Xu, Lidong Bing, Wei Lu and Fei Huang.
Tsinghua University Liu Zhiyuan
Liu Zhiyuan is an Associate Professor in the Department of Computer Science and Technology at Tsinghua University. He received his bachelor's degree in engineering and his Ph.D. from Tsinghua's Department of Computer Science and Technology in 2006 and 2011, respectively. His research interests include natural language processing and social computing. He has published more than 90 papers in international journals and conferences, including ACM Transactions, IJCAI, AAAI, ACL, and EMNLP.
Personal homepage: http://nlp.csai.tsinghua.edu.cn/~lzy/
The following are the accepted papers of Liu Zhiyuan's team at EMNLP 2020:
1, "Coreferential Reasoning Learning for Language Representation" Deming Ye, Yankai Lin, Jiaju Du, Zhenghao Liu, Peng Li, Maosong Sun and Zhiyuan Liu.
2, "Dynamic Anticipation and Completion for Multi-Hop Reasoning over Sparse Knowledge Graph"
Xin Lv, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Wei Zhang, YICHI ZHANG, Hao Kong and Suhui Wu.
3, "Learning from Context or Names? An Empirical Study on Neural Relation Extraction" Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun and Jie Zhou.
4, "Exploring and Evaluating Attributes, Values, and Structures for Entity Alignment" Zhiyuan Liu, Yixin Cao, Liangming Pan, Juanzi Li, Zhiyuan Liu and Tat-Seng Chua.
5,《MAVEN: A Massive General Domain Event Detection Dataset》Xiaozhi Wang, Ziqi Wang, Xu Han, Wangyi Jiang, Rong Han, Zhiyuan Liu, Juanzi Li, Peng Li, Yankai Lin and Jie Zhou.
6, "Partially-Aligned Data-to-Text Generation with Distant Supervision" Zihao Fu, Bei Shi, Wai Lam, Lidong Bing and Zhiyuan Liu.
7, "Train No Evil: Selective Masking for Task-Guided Pre-Tuxraining" Yian Gu , Zhengyan Zhang, Xiaozhi Wang, Zhiyuan Liu and Maosong Sun.
8, "Denoising Relation Extraction from Document-level Distant Supervision" Chaojun Xiao, Yuan Yao, Ruobing Xie, Xu Han, Zhiyuan Liu, Maosong Sun, Fen Lin and Leyu Lin.
Xiong Caiming
Xiong Caiming is currently Senior Research Director at Salesforce AI. From June 2014 to September 2015, he was a postdoctoral researcher at the University of California, Los Angeles (UCLA). He received his Ph.D. in Computer Science and Engineering from the State University of New York at Buffalo in 2014 (supervised by Professor Jason J. Corso), and his bachelor's and master's degrees in computer science from Huazhong University of Science and Technology in 2005 and 2007, respectively.
Personal homepage: http://cmxiong.com/
The following are the accepted papers of Xiong Caiming's team at EMNLP 2020:
Harbin Institute of Technology Liu Ting
"Personnel Plan" for leading talents in technological innovation. Director of the China Computer Society, executive director of the Chinese Information Society of China/Director of the Social Media Processing Committee (SMP), and former chairperson of the ACL and EMNLP fields of top international conferences. The main research directions of
are artificial intelligence, natural language processing and social computing. The number of papers published in the top conferences in the field of natural language processing from 2012 to 2017 ranked 8th in the world (according to Cambridge University statistics).
The following are the papers accepted by Liu Ting’s team at this EMNLP 2020 main conference:
1, "Discourse Self-Attention for Discourse Element Identification in Argumentative Student Essays" . Wei Song, Ziyao Song, Ruiji Fu, Lizhen Liu, Miaomiao Cheng and Ting Liu .
2, "Profile Consistency Identification for Open-domain Dialogue Agents" . Haoyu Song, Yan Wang, Wei-Nan Zhang, Zhengyu Zhao, Ting Liu and Xiaojiang Liu.
3, "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" . Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu and Xiangzhan Yu.
4, "Counterfactual Off-Policy Training for Neural Dialogue Generation" . Qingfu Zhu, Wei-Nan Zhang, Ting Liu and William Yang Wang.
5, "Combining Self-Training and Self-Supervised Learning for Unsupervised Disfluency Detection" . Shaolei Wang, Zhongyuan Wang, Wanxiang Che and Ting Liu.
6, "Multi-Stage Pre-training for Automated Chinese Essay Scoring" . Wei Song, Kai Zhang, Ruiji Fu, Lizhen Liu, Ting Liu a nd Miaomiao Cheng.
7, "Is Graph Structure Necessary for Multi-hop Question Answering?" . Nan Shao, Yiming Cui, Ting Liu, Shijin Wang and Guoping Hu.
University of Illinois Han Jiawei
Han Jiawei, of the University of Illinois at Urbana-Champaign, is a Fellow of the IEEE and ACM and director of the U.S. Information Network Academic Research Center. He has served as program committee chair of internationally renowned conferences such as KDD, SDM, and ICDM, and founded the journal ACM TKDD, serving as its editor-in-chief. He has published more than 600 papers in data mining, databases, and information networks, and enjoys high prestige in the field of data mining.
The following are the papers accepted by the Han Jiawei team at this EMNLP 2020 main conference:
1, "Multi-document Summarization with Maximal Marginal Relevance-guided Reinforcement Learning"
. Yuning Mao, Yanru Qu, Yiqing Xie, Xiang Ren and Jiawei Han.
2, " Near-imperceptible Neural Linguistic Steganography via Self-Adjusting Arithmetic Coding》
.Jiaming Shen, Heng Ji and Jiawei Han.
3, "SynSetExpan: An Iterative Framework for Joint Entity Set Expansion and Synonym Discovery" Jiaming Shen, Wenda Qiu, Jingbo Shang, Michelle Vanni, Xiang Ren and Jiawei Han.
4, "Understanding the Difficulty of Training Transformers" Liyuan Liu, Xiaodong Liu, Jianfeng Gao, Weizhu Chen and Jiawei Han.
5, "Text Using Label Names Only: A Language Model Self-Training Approach》
.Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang and Jiawei Han.
6, "Weakly-Supervised Aspect-Based Sentiment Analysis via Joint Aspect-Sentiment Topic Embedding" Jiaxin Huang, Yu Meng, Fang Guo, Heng Ji and Jiawei Han.
Microsoft Research Asia Zhou Ming
Zhou Ming is Vice President of Microsoft Research Asia and president of the Association for Computational Linguistics (ACL). He is a director of the China Computer Federation, director of its Chinese Information Technology Committee and of the Terminology Working Committee, an executive director of the Chinese Information Processing Society of China, and a doctoral supervisor at Harbin Institute of Technology, Tianjin University, Nankai University, Shandong University, and several other universities.
Personal homepage: https://www.microsoft.com/en-us/research/people/mingzhou/
The following are the accepted papers of Zhou Ming's team at the EMNLP 2020 main conference:
1, "Pre-training for Abstractive Document Summarization by Reinstating" Source Text》
. Yanyan Zou, Xingxing Zhang, Wei Lu, Furu Wei and Ming Zhou.
2, "Neural Deepfake Detection with Factual Structure of Text" Wanjun Zhong, Duyu Tang, Zenan Xu, Ruize Wang, Nan Duan, Ming Zhou, Jiahai Wang and Jian Yin.
3, "Tell Me How to Ask Again: Question Data Augmentation with Controllable Rewriting in Continuous Space"
. Dayiheng Liu, Yeyun Gong, Jie Fu, Yu Yan, Jiusheng Chen, Jiancheng Lv, Nan Duan and Ming Zhou.
4, "Leveraging Declarative Knowledge in Text and First-Order Logic for Fine-Grained Propaganda Detection"
. Ruize Wang, Duyu Tang, Nan Duan, Wanjun Zhong, Zhongyu Wei, Xuanjing Huang, Daxin Jiang and Ming Zhou.
5 , "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing".
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei and Ming Zh ou.
Short Papers
6, "Improving the Efficiency of Grammatical Error Correction with Erroneous Span Detection and Correction"
.Mengyun Chen, Tao Ge, Xingxing Zhang, Furu Wei and Ming Zhou.
Huang Xuanjing
Huang Xuanjing is a professor in the School of Computer Science at Fudan University. From 2008 to 2009, she was a visiting scholar at the CIIR, UMass Amherst. Her research interests include natural language processing, information retrieval, artificial intelligence, and deep learning. She has published dozens of papers at top conferences including SIGIR, ACL, ICML, IJCAI, AAAI, CIKM, ISWC, EMNLP, WSDM, and COLING, and has served as PC co-chair of NLPCC 2017, CCL 2016, SMP 2015, and SMP 2014, among others.
Google Scholar homepage: https://scholar.google.com/citations?user=RGsMgZA4H78C&hl=en
The following are the papers accepted by Huang Xuanjing's team at the EMNLP 2020 main conference:
1, "Tasty Burgers, Soggy Fries: Probing Aspect Robustness in Aspect-Based Sentiment Analysis".
Xiaoyu Xing, Zhijing Jin, Di, Jin, Zhang Bingning and Xuanjing Huang.
2, "A Knowledge-Aware Sequence-to-Tree Network for Math Word Problem Solving".
Qinzhuo Wu, Qi Zhang, Jinlan Fu and Xuanjing Huang.
3, "Uncertainty-Aware Label Refinement for Sequence Labeling".
Tao Gui, Jiacheng Ye, Qi Zhang, Zhengyan Li, Zichu Fei, Yeyun Gong and Xuanjing Huang.
4, "Leveraging Declarative Knowledge in Text and First-Order Logic for Fine-Grained Propaganda Detection"
. Ruize Wang, Duyu Tang, Nan Duan , Wanjun Zhong, Zhongyu Wei, Xuanjing Huang, Daxin Jiang and Ming Zhou.
5, "PathQG: Neural Question Generation from Facts"
. Siyuan Wang, Zhongyu Wei, Zhihao Fan, Zengfeng Huang, Weijian Sun, Qi ZHANG and Xuanjing Huang.
6, "RethinkCWS: Is Chinese Word Segmentation a Solved Task?"
. Jinlan Fu, Pengfei Liu, Qi Zhang and Xuanjing Huang.
Finally, AI Technology Review wishes Chinese scholars great success in the EMNLP 2020 best paper awards~