Chinese Syntax Parsing Based on Sliding Match of Semantic String
The scarcity of Chinese-Vietnamese bilingual parallel corpora makes existing Chinese-Vietnamese machine translation unsatisfactory. Considering the differences between Chinese and Vietnamese, we propose a method for Chinese-Vietnamese tree-to-tree statistical machine translation with language features. Language-difference features provide useful supervision for machine translation. By analyzing the syntactic differences between Chinese and Vietnamese, we define several language-difference rules: attributive postposition, time-adverbial postposition, and locative-adverbial postposition. On the basis of a Chinese-Vietnamese bilingual word-aligned corpus, these rules are incorporated into the extraction of tree-to-tree translation rules. The defined rules are then used to constrain decoding and to prune and rerank the candidate sentences, so that we acquire the optimal translation sequence. Experiments on Chinese-Vietnamese bilingual sentence translation show that the proposed method performs well and that syntactic difference features can greatly improve the efficiency and accuracy of translation.
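As an illustration of how such a rule can constrain decoding, the minimal sketch below prunes Vietnamese candidates that place an attributive adjective before its head noun (the Chinese order rather than the Vietnamese one). The tags, toy tokens, and the exact form of the check are assumptions for illustration, not the paper's implementation.

```python
# A hedged sketch of one language-difference rule used as a decoding constraint:
# Vietnamese places attributives after the head noun, whereas Chinese places them
# before it, so Vietnamese candidates with a pre-nominal adjective can be pruned.
def violates_attributive_postposition(tagged_tokens):
    """True if an adjective immediately precedes a noun (Chinese-style order)."""
    for (_, t1), (_, t2) in zip(tagged_tokens, tagged_tokens[1:]):
        if t1 == "ADJ" and t2 == "NOUN":
            return True
    return False

candidates = [
    [("nhà", "NOUN"), ("đẹp", "ADJ")],   # noun + attributive: kept
    [("đẹp", "ADJ"), ("nhà", "NOUN")],   # attributive + noun: pruned
]
kept = [c for c in candidates if not violates_attributive_postposition(c)]
print(kept)   # only the noun-first candidate survives
```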
With the advent of social network services, Arab users' opinions on the web have attracted many researchers in recent years toward detecting and classifying sentiments in Arabic tweets and reviews. However, the impact of word embedding vector (WEV) initialization and dataset balance on Arabic sentiment classification using deep learning has not been thoroughly studied. In this paper, a multi-channel embedding convolutional neural network (MCE-CNN) is proposed to improve Arabic sentiment classification by learning sentiment features from different text domains and from word- and character-n-gram levels. MCE-CNN encodes a combination of different pre-trained word embeddings into the embedding block of each embedding channel and trains these channels in parallel. In addition, a separate feature extraction module implemented as a CNN block is used to extract more relevant sentiment features. These channels and blocks make it possible to start training from high-quality WEVs and to fine-tune them. The performance of MCE-CNN is evaluated on several standard balanced and imbalanced datasets to reflect real-world use cases. Experimental results show that MCE-CNN achieves high classification accuracy and benefits from the second embedding channel on both standard Arabic and dialectal Arabic text, outperforming state-of-the-art methods.
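The multi-channel idea can be made concrete with a small PyTorch sketch: each channel holds one pre-trained embedding table that is fine-tuned, the channels feed a shared convolutional block, and max-pooled filter outputs drive the classifier. Dimensions, filter sizes, and the random stand-in "pre-trained" tables are placeholders, not the paper's configuration.

```python
# A minimal sketch of a multi-channel embedding CNN (MCE-CNN-style) in PyTorch.
import torch
import torch.nn as nn

class MCECNN(nn.Module):
    def __init__(self, emb_matrices, n_classes=2, n_filters=100, kernel_sizes=(3, 4, 5)):
        super().__init__()
        # One embedding channel per pre-trained matrix; each is fine-tuned.
        self.channels = nn.ModuleList(
            nn.Embedding.from_pretrained(m, freeze=False) for m in emb_matrices
        )
        dim = emb_matrices[0].size(1)
        # A separate CNN block extracts sentiment features from the channels.
        self.convs = nn.ModuleList(
            nn.Conv1d(dim * len(emb_matrices), n_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        # Concatenate the channels along the embedding dimension.
        x = torch.cat([ch(token_ids) for ch in self.channels], dim=-1)
        x = x.transpose(1, 2)                          # (batch, dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

# Toy usage: two random "pre-trained" tables over a 1000-word vocabulary.
embs = [torch.randn(1000, 300), torch.randn(1000, 300)]
model = MCECNN(embs)
logits = model(torch.randint(0, 1000, (8, 40)))        # 8 sentences, 40 tokens each
```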
Translation quality estimation is an important task in machine translation, which has attracted increasing interest in recent years. A key problem in translation quality estimation is the lack of a sufficient amount of quality-annotated training data. To address this lack, the Predictor-Estimator was recently proposed by introducing "word prediction" as an additional pre-subtask that predicts a current target word in consideration of surrounding source and target contexts, leading to a two-stage neural model: a predictor and an estimator. However, the original Predictor-Estimator is not trained as a continuous stacking model but in a cascaded manner that trains the predictor separately from the estimator. In addition, the Predictor-Estimator is trained with only single-task learning, which uses target-specific quality estimation data, without using training data available from other tasks. In this paper, we therefore propose multi-task stack propagation, which extensively applies stack propagation to fully train the Predictor-Estimator on a continuous stacking architecture, and multi-task learning to enhance the training data with other related quality estimation tasks. Experimental results on WMT17 quality estimation datasets show that the Predictor-Estimator trained with multi-task stack propagation provides statistically significant improvements over the baseline models. In particular, under an ensemble setting, the proposed multi-task stack propagation leads to state-of-the-art performance on all WMT17 quality estimation tasks (at the sentence, word, and phrase levels).
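The core of stack propagation can be sketched as a single computation graph in which the word-prediction loss and the quality-estimation loss are summed, so one backward pass updates the predictor and estimator jointly rather than in a cascade. The modules, sizes, and toy data below are placeholders, not the paper's architecture.

```python
# A hedged sketch of stack propagation over a predictor-estimator stack in PyTorch.
import torch
import torch.nn as nn

class Predictor(nn.Module):
    def __init__(self, vocab=5000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * dim, vocab)   # predicts target words from context

    def forward(self, tgt_ids):
        feats, _ = self.rnn(self.emb(tgt_ids))  # quality-estimation feature vectors
        return feats, self.out(feats)

class Estimator(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.rnn = nn.GRU(2 * dim, dim, batch_first=True)
        self.score = nn.Linear(dim, 1)          # sentence-level quality score

    def forward(self, feats):
        _, h = self.rnn(feats)
        return self.score(h[-1]).squeeze(-1)

predictor, estimator = Predictor(), Estimator()
opt = torch.optim.Adam(list(predictor.parameters()) + list(estimator.parameters()))

tgt = torch.randint(0, 5000, (4, 20))           # toy target sentences
gold_quality = torch.rand(4)                    # toy sentence-level quality labels

feats, word_logits = predictor(tgt)
loss_word = nn.functional.cross_entropy(word_logits.reshape(-1, 5000), tgt.reshape(-1))
loss_qe = nn.functional.mse_loss(estimator(feats), gold_quality)
(loss_word + loss_qe).backward()                # one backward pass through the stack
opt.step()
```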
Chinese zero pronoun (ZP) resolution plays a critical role in discourse analysis. Different from traditional mention-to-mention approaches, this paper proposes a chain-to-chain approach to improve the performance of ZP resolution from three aspects. Firstly, consecutive ZPs are clustered into coreferential chains, each working as one independent anaphor as a whole. In this way, those ZPs far away from their overt antecedents can be bridged via other consecutive ZPs in the same coreferential chains and thus better resolved. Secondly, common noun phrases (NPs) are automatically grouped into coreferential chains using traditional approaches, each working as one independent antecedent candidate as a whole. That is, those NPs occurring in the same coreferential chain are viewed as one antecedent candidate as a whole, and ZP resolution is made between ZP coreferential chains and common NP coreferential chains. In this way, the performance can be much improved due to the effective reduction of the search space by pruning singletons and negative instances. Thirdly and finally, additional features from ZP and common NP coreferential chains are employed to better represent anaphors and their antecedent candidates, respectively. Comprehensive experiments on the OntoNotes V5.0 corpus show that our chain-to-chain approach significantly outperforms the state-of-the-art mention-to-mention approaches. To our knowledge, this is the first work to resolve zero pronouns in a chain-to-chain manner.
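A minimal sketch of the chain-to-chain setup follows: consecutive ZPs are clustered into a single anaphor chain, and resolution then scores that chain against whole NP coreferential chains rather than against individual mentions. The clustering criterion, the toy chains, and the stand-in scorer are illustrative assumptions, not the paper's learned model.

```python
# A toy sketch of chain-to-chain zero pronoun resolution.
def cluster_consecutive(zp_positions, max_gap=1):
    """Group ZP positions into chains of consecutive ZPs."""
    chains, current = [], [zp_positions[0]]
    for pos in zp_positions[1:]:
        if pos - current[-1] <= max_gap:
            current.append(pos)
        else:
            chains.append(current)
            current = [pos]
    chains.append(current)
    return chains

def resolve(zp_chain, np_chains, score):
    """Pick the best-scoring NP chain as antecedent for the whole ZP chain."""
    return max(np_chains, key=lambda np_chain: score(zp_chain, np_chain))

print(cluster_consecutive([3, 4, 5, 9]))          # [[3, 4, 5], [9]]
np_chains = [["小明", "他"], ["公司"]]
print(resolve([3, 4, 5], np_chains, score=lambda z, n: len(n)))
```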
Role of Discourse Information in Urdu Sentiment Classification: A Rule-Based Method and Machine Learning Technique
Code-switching, or the juxtaposition of linguistic units from two or more languages in a single utterance, has in recent times become very common in text, thanks to social media and other computer-mediated forms of communication. In this exploratory study of English-Hindi code-switching on Twitter, we automatically create a large corpus of code-switched tweets and devise techniques to identify the relationship between successive components in a code-switched tweet. More specifically, we identify pragmatic functions such as narrative-evaluative, negative reinforcement, and translation that characterize the relation between successive components. We analyze the differences and similarities between switching patterns in code-switched and monolingual multi-component tweets. We observe a strong dominance of narrative-evaluative (non-opinion to opinion, or vice versa) switching in both code-switched and monolingual multi-component tweets, in around 40% of cases. Polarity switching appears to be a prevalent switching phenomenon (10%) specifically in code-switched tweets (three to four times higher than in monolingual multi-component tweets), where negative sentiment is expressed in Hindi approximately twice as often as in English. Positive reinforcement appears to be an important pragmatic function for English multi-component tweets, whereas negative reinforcement plays a key role for Devanagari multi-component tweets. Our results also indicate that the extent and nature of code-switching strongly depend on the topic (sports, politics, etc.) of discussion.
Part-of-Speech (POS) tagging is a well-established technology for most West European languages, and a few other world languages, but it has not been evaluated on Igbo, an agglutinative African language. This article presents POS tagging experiments conducted using an Igbo corpus as a test bed for identifying the POS taggers and the Machine Learning (ML) methods that can achieve good performance with the small data set available for the language. Experiments have been conducted using different well-known POS taggers developed for English or European languages, and different training data styles and sizes. Igbo has a number of language-specific characteristics that present a challenge for effective POS tagging. One interesting case is the wide use of verbs (and nominalisations thereof) that have an inherent noun complement; these form linked pairs in the POS tagging scheme but may appear discontinuously. Another issue is Igbo's highly productive agglutinative morphology, which can produce many variant word forms from a given root. This productivity is a key cause of the out-of-vocabulary (OOV) words observed during Igbo tagging. We report results of experiments on a promising direction for improving tagging performance on such morphologically-inflected OOV words.
Singlish can be interesting to the computational linguistics community both linguistically, as a major low-resource creole based on English, and computationally, for information extraction and sentiment analysis of regional social media. In our conference paper (Wang et al.), we investigated part-of-speech (POS) tagging and dependency parsing for Singlish by constructing a treebank under the Universal Dependencies scheme, and successfully used neural stacking models to integrate English syntactic knowledge to boost Singlish POS tagging and dependency parsing, achieving state-of-the-art accuracies of 89.50% and 84.47% for Singlish POS tagging and dependency parsing, respectively. In this work, we substantially extend Wang et al. by enlarging the Singlish treebank to more than triple its size, with much greater diversity of topics, and by further exploring neural multi-task models for integrating English syntactic knowledge. Results show that the enlarged treebank achieves significant relative error reductions of 45.8% and 15.5% with the base model, 27% and 10% with the neural multi-task model, and 21% and 15% with the neural stacking model for POS tagging and dependency parsing, respectively. Moreover, the state-of-the-art Singlish POS tagging and dependency parsing accuracies have been improved to 91.45% and 85.57%, respectively. We make our treebanks and models available for further research.
Transfer parsing has been used to develop dependency parsers for languages with no treebank by transferring from the treebanks of other (source) languages. In delexicalized transfer parsing, words are replaced by their part-of-speech tags. Transfer parsing may not work well if a language does not follow a uniform syntactic structure across its different constituent patterns. Earlier work has used information derived from linguistic databases to transform a source-language treebank so as to reduce the syntactic differences between the source and the target languages. We propose a transformation method in which a source-language pattern is transformed stochastically into one of the multiple possible patterns followed in the target language. The transformed source-language treebank can then be used to train a delexicalized parser for the target language. We show that this method significantly improves the average performance of single-source delexicalized transfer parsers. We also propose a multi-source transfer parsing approach that concatenates transformed source-language treebanks, and we show that multi-source parsers work better when using a subset of the source-language treebanks rather than all of them or only one. The treebanks are selected greedily based on the labelled attachment scores of the corresponding single-source parser trained using the treebank after transformation.
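The stochastic transformation can be sketched as sampling a target-side order for each local head-dependent pattern in proportion to how often that order occurs in the target language. The pattern inventory and probabilities below are hypothetical; a real system would estimate them from linguistic databases or target-side statistics.

```python
# A hedged sketch of stochastic source-treebank transformation.
import random

# Hypothetical target-side statistics: for a NOUN head with one ADJ dependent,
# the target language places the ADJ after the noun 70% of the time.
target_orders = {
    ("NOUN", frozenset({"ADJ"})): [(("ADJ", "HEAD"), 0.3), (("HEAD", "ADJ"), 0.7)],
}

def transform_subtree(head_pos, dep_tags):
    """Stochastically pick a target-language order for a head and its dependents."""
    key = (head_pos, frozenset(dep_tags))
    if key not in target_orders:
        return ("HEAD",) + tuple(dep_tags)   # leave unseen patterns unchanged
    orders, weights = zip(*target_orders[key])
    return random.choices(orders, weights=weights, k=1)[0]

print(transform_subtree("NOUN", ["ADJ"]))    # e.g. ('HEAD', 'ADJ')
```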
Sentiment Analysis for a Resource-Poor Language - Roman Urdu
Semantic information, which has been proven necessary for the resolution of common noun phrases, is typically ignored by most existing Chinese zero pronoun resolvers. This is because zero pronouns convey no descriptive information, which makes it almost impossible to compute semantic similarities between a zero pronoun and its candidate antecedents. Moreover, most traditional approaches are based on the single-candidate model, which considers the candidate antecedents of a zero pronoun in isolation and thus overlooks their reciprocities. To address these problems, we first propose a neural network-based zero pronoun resolver (NZR) that is capable of generating vector-space semantics for zero pronouns and candidate antecedents. On the basis of NZR, we develop a collaborative filtering-based framework for the Chinese zero pronoun resolution task, exploring the reciprocities between the candidate antecedents of a zero pronoun to more rationally re-estimate their importance. Experimental results on the Chinese portion of the OntoNotes corpus are encouraging: our proposed model substantially surpasses the Chinese zero pronoun resolution baseline systems.
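The collaborative-filtering flavor of the re-estimation can be sketched as follows: each candidate's initial resolver score is interpolated with the scores of its most similar fellow candidates, so candidates support or suppress one another instead of being scored in isolation. The vectors, interpolation weight, and neighbourhood size are illustrative assumptions.

```python
# A hedged sketch of similarity-based re-estimation of candidate antecedents.
import numpy as np

def rerank(cand_vecs, init_scores, k=2, alpha=0.7):
    sims = cand_vecs @ cand_vecs.T
    norms = np.linalg.norm(cand_vecs, axis=1)
    sims = sims / np.outer(norms, norms)              # cosine similarities
    new_scores = []
    for i, s in enumerate(init_scores):
        neighbours = [j for j in np.argsort(-sims[i]) if j != i][:k]
        neigh = np.mean([sims[i, j] * init_scores[j] for j in neighbours])
        new_scores.append(alpha * s + (1 - alpha) * neigh)
    return new_scores

vecs = np.random.randn(4, 8)                          # 4 candidate antecedents
print(rerank(vecs, init_scores=[0.9, 0.2, 0.5, 0.4]))
```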
Deep contextualized word embeddings (ELMo), as an emerging and effective replacement for static word embeddings, have achieved success on a range of syntactic and semantic NLP problems. However, little is known about what is responsible for the improvements. In this paper, we focus on the effect of ELMo on a typical syntax problem: universal POS tagging and dependency parsing. We incorporate ELMo as additional word embeddings into a state-of-the-art POS tagger and dependency parser, which leads to consistent performance improvements. Experimental results show that the model using ELMo outperforms the state-of-the-art baseline by an average of 0.91 for POS tagging and 1.11 for dependency parsing. Further analysis reveals that the improvements mainly result from ELMo's better abstraction ability on out-of-vocabulary (OOV) words, an ability achieved by the character-level word representation in ELMo. Based on ELMo's advantage on OOV words, we conduct experiments that simulate low-resource settings; the results show that deep contextualized word embeddings are effective for data-insufficient tasks in which the OOV problem is severe.
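The "additional word embeddings" setup can be sketched by concatenating a static embedding with a character-derived vector before the tagger's output layer. The tiny BiLSTM character encoder below is a stand-in for pre-trained ELMo (which the paper credits for the OOV gains via its character-level word representations); all dimensions are illustrative.

```python
# A hedged sketch of a tagger consuming static + character-level word vectors.
import torch
import torch.nn as nn

class Tagger(nn.Module):
    def __init__(self, vocab=1000, chars=100, dim=50, n_tags=17):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, dim)          # static embeddings
        self.char_emb = nn.Embedding(chars, dim)
        self.char_rnn = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        # The tagger sees static and character-derived vectors side by side.
        self.out = nn.Linear(dim + 2 * dim, n_tags)

    def forward(self, word_ids, char_ids):
        # char_ids: (batch, seq_len, max_word_len)
        b, s, c = char_ids.shape
        chars = self.char_emb(char_ids.view(b * s, c))
        _, (h, _) = self.char_rnn(chars)
        char_vec = torch.cat([h[0], h[1]], dim=-1).view(b, s, -1)
        x = torch.cat([self.word_emb(word_ids), char_vec], dim=-1)
        return self.out(x)

model = Tagger()
logits = model(torch.randint(0, 1000, (2, 6)), torch.randint(0, 100, (2, 6, 8)))
```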
Understanding causality in text is crucial for intelligent agents. In this paper, inspired by human causality learning, we propose an experience-based causality learning framework. In contrast to traditional approaches, which attempt to handle the causality problem by relying on textual clues and linguistic resources, we are the first to use experience information for causality learning. Specifically, we first construct various scenarios for intelligent agents so that the agents can gain experience by interacting in these scenarios. Human participants then build a number of training instances for the agents' causality learning based on these scenarios. Each instance contains two sentences and a label: each sentence describes an event that the agent experienced in a scenario, and the label indicates whether the sentence (event) pair stands in a causal relation. Accordingly, we propose a model that can infer causality in text from experience by accessing the corresponding event information for the input sentence pair. Experimental results show that our method achieves impressive performance on the grounded causality corpus and significantly outperforms conventional approaches. Our work suggests that experience is very important for intelligent agents to understand causality.
Comparable corpora are valuable alternatives to expensive parallel corpora. They comprise informative parallel fragments, which are useful resources for different natural language processing tasks. In this work, a generative model is proposed for efficient extraction of parallel fragments from a pair of comparable documents. The core of the proposed model is a graph called the Matching Graph. The ability of the Matching Graph to be trained on a small initial seed makes it a suitable model for language pairs suffering from resource scarcity. Experiments show that the Matching Graph performs significantly better than other recently published models. According to the experiments on English-Persian and Arabic-Persian language pairs, the extracted parallel fragments can be used instead of parallel data for training statistical machine translation systems. Results reveal that, in the best case, the extracted fragments are able to retrieve about 90% of the information of a statistical machine translation system trained on a parallel corpus. Moreover, using the extracted fragments as additional training data for statistical machine translation systems leads to an improvement of about 2% for English-Persian and about 1% for Arabic-Persian translation in BLEU score.
Neural machine translation (NMT) has made remarkable progress in recent years, but its performance suffers from a data sparsity problem, since large-scale parallel corpora are readily available only for high-resource languages (HRLs). Recently, transfer learning (TL) has been widely used in machine translation for low-resource languages (LRLs) and is becoming one of the vital directions for addressing the data sparsity problem in low-resource NMT. In such transfer learning, the low-resource model (the child) is generally initialized with the high-resource model (the parent). However, the original TL approach can neither make full use of multiple highly related HRLs nor provide children with different parameters from the same parent. To exploit multiple HRLs effectively, we present a language-independent and straightforward multi-round transfer learning (MRTL) approach to low-resource NMT. In addition, to reduce the differences between high-resource and low-resource languages at the character level, we introduce a unified transliteration method for various language families that are semantically and syntactically highly analogous to each other. Experiments on low-resource datasets show that our approaches are effective, significantly outperform the state-of-the-art methods, and yield improvements of up to 5.63 BLEU points.
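The multi-round idea can be sketched as a chain of training rounds over a shared architecture, where each round starts from the previous round's parameters and the final round is the low-resource child. The tiny linear model, training loop, and random corpora below are placeholders under that assumption, not an NMT system.

```python
# A hedged sketch of multi-round transfer learning (MRTL) in PyTorch.
import torch
import torch.nn as nn

def train_round(model, data, epochs=100):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            nn.functional.mse_loss(model(x), y).backward()
            opt.step()
    return model

model = nn.Linear(4, 4)                          # stand-in for a shared NMT model
rounds = [
    [(torch.randn(8, 4), torch.randn(8, 4))],    # high-resource parent, round 1
    [(torch.randn(8, 4), torch.randn(8, 4))],    # related high-resource parent, round 2
    [(torch.randn(2, 4), torch.randn(2, 4))],    # low-resource child, final round
]
for corpus in rounds:
    # Parameters carry over between rounds: each child starts from its parent.
    model = train_round(model, corpus)
```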
Named Entity Recognition (NER) plays a pivotal role in various natural language processing tasks, such as machine translation and automatic question answering. Recognizing the importance of NER, a plethora of NER techniques has been developed for Western and Asian languages. However, despite there being over 490 million Urdu speakers worldwide, NER resources for Urdu are either non-existent or inadequate. To fill this gap, this paper makes three key contributions. Firstly, we have developed the largest Urdu NER corpus, which contains 926,776 tokens and 99,718 carefully annotated NEs. The developed corpus has more than doubled the number of manually tagged NEs compared to any existing Urdu NER corpus. Secondly, we have generated four word embeddings using two different techniques, fastText and Word2vec, on two corpora of Urdu text. Besides the recently released Urdu word embeddings from Facebook, these are the only publicly available embeddings for the Urdu language. Finally, we have pioneered the application of deep learning techniques, namely NN and RNN, to Urdu named entity recognition. Based on an analysis of the results, several valuable insights are provided about the effectiveness of deep learning techniques and the impact of word embeddings on them.
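As a small how-to, pre-trained embeddings of both kinds can be loaded with gensim before being handed to a neural tagger; the file names below are hypothetical placeholders for the released resources, and the nearest-neighbour query is just a sanity check of embedding quality.

```python
# A minimal sketch of loading Word2vec- and fastText-format Urdu embeddings.
from gensim.models import KeyedVectors
from gensim.models.fasttext import load_facebook_vectors

w2v = KeyedVectors.load_word2vec_format("urdu_word2vec.txt")   # hypothetical path
ft = load_facebook_vectors("urdu_fasttext.bin")                # hypothetical path

# Nearest neighbours give a quick sanity check of embedding quality.
print(w2v.most_similar("پاکستان", topn=5))
```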
Most syntax-based metrics obtain similarity by comparing sub-structures extracted from the trees of the hypothesis and the reference. These sub-structures cannot represent all the information in the trees because their lengths are limited. To use the reference syntax information more fully, a new automatic evaluation metric is proposed based on a dependency parsing model. First, a dependency parsing model is trained using the reference dependency tree for each sentence. Then, the hypothesis is parsed by this dependency parsing model and the corresponding hypothesis dependency tree is generated. The quality of the hypothesis can then be judged by the quality of the hypothesis dependency tree. Unigram F-score is included in the new metric so that lexical similarity is also captured. According to the experimental results, the proposed metric performs better than METEOR and BLEU at the system level and obtains results comparable to METEOR at the sentence level. To further improve performance, we also propose a combined metric that achieves the best performance at both the sentence and system levels.
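The lexical component and the combination step can be sketched directly: a unigram F-score between hypothesis and reference is linearly interpolated with a syntactic score. Here `tree_score` is a placeholder for the quality of the hypothesis dependency tree under the reference-trained parsing model, and the weight is illustrative.

```python
# A minimal sketch of unigram F-score plus a linear combination with a tree score.
from collections import Counter

def unigram_f(hyp, ref):
    overlap = sum((Counter(hyp) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / len(hyp), overlap / len(ref)
    return 2 * p * r / (p + r)

def combined_metric(hyp, ref, tree_score, w=0.5):
    # `tree_score` stands in for the quality of the hypothesis dependency tree
    # produced by the reference-trained parsing model.
    return w * unigram_f(hyp, ref) + (1 - w) * tree_score

print(combined_metric("the cat sat".split(), "a cat sat".split(), tree_score=0.8))
```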
Ancient Chinese carries the wisdom and spiritual culture of the Chinese nation. Automatic translation from ancient Chinese to modern Chinese helps to inherit and carry forward the quintessence of the ancients. However, the lack of a large-scale parallel corpus limits the study of machine translation between Ancient and Modern Chinese. In this paper, we propose an Ancient-Modern Chinese clause alignment approach based on the characteristics of these two languages. The method combines lexical information and statistical information, and achieves a 94.2 F1-score on our manually annotated test set. We use this method to create a new large-scale Ancient-Modern Chinese parallel corpus containing over 1.24M bilingual pairs. To the best of our knowledge, this is the first large high-quality Ancient-Modern Chinese dataset. Furthermore, we analyze and compare the performance of SMT and various NMT-based models on this dataset, providing a strong baseline for this task.
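One way to picture the lexical-plus-statistical combination is a clause-pair score that mixes a length-ratio term (statistical) with a shared-character term (lexical, exploiting the fact that modern Chinese clauses often reuse characters from their ancient counterparts). The weighting and both terms are illustrative assumptions, not the paper's tuned model.

```python
# A hedged sketch of a combined lexical + statistical clause-pair score.
def align_score(ancient, modern, lam=0.5):
    # Statistical cue: aligned clauses tend to have correlated lengths.
    len_ratio = min(len(ancient), len(modern)) / max(len(ancient), len(modern))
    # Lexical cue: shared characters between the two clauses.
    shared = len(set(ancient) & set(modern))
    lexical = shared / max(len(set(ancient)), 1)
    return lam * len_ratio + (1 - lam) * lexical

print(align_score("学而时习之", "学习并且时常温习它"))
```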
This paper presents a comprehensive study on Burmese (Myanmar) morphological analysis, from annotated data preparation to experiment-based investigation. Twenty thousand Burmese sentences in the news domain are annotated with morphological information as one component of the Asian Language Treebank Project. The annotation includes two-layer tokenization and part-of-speech (POS) tagging, providing rich information at both the morphological level and the syntactic constituent level. The annotated corpus has been released under a CC BY-NC-SA license, and it was the largest open-access database of annotated Burmese when this manuscript was prepared in 2017. Detailed descriptions of the preparation, refinement, and features of the annotated corpus are provided in the first half of the paper. Facilitated by the deliberately prepared corpus, experiment-based investigations of Burmese morphological analysis are presented in the second half of the paper, wherein the standard sequence-labeling approach using conditional random fields (CRFs) and a long short-term memory (LSTM) based recurrent neural network (RNN) are applied and discussed. We reach several general conclusions on the Burmese morphological analysis task, covering the design of the output tag scheme, the effect of joint tokenization and POS tagging, and the importance of ensembling for stabilizing the performance of the LSTM-based RNN. This study provides a solid basis for further studies on Burmese processing. Owing to the present study, Burmese should no longer be referred to as a low-resourced or under-studied language in terms of morphological analysis.
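The joint tokenization-and-POS formulation can be sketched as character-level sequence labeling where each output tag combines a token-boundary marker (B/I) with the token's POS, trained here with sklearn-crfsuite. The toy characters, features, and tag set are placeholders, not the ALT annotation scheme.

```python
# A minimal sketch of joint tokenization + POS tagging as sequence labeling.
import sklearn_crfsuite

def char_features(sent, i):
    return {"char": sent[i],
            "prev": sent[i - 1] if i > 0 else "<s>",
            "next": sent[i + 1] if i < len(sent) - 1 else "</s>"}

X = [[char_features("abcd", i) for i in range(4)]]
y = [["B-n", "I-n", "B-v", "I-v"]]          # joint B/I + POS output tags

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```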
Modern Standard Arabic, as well as the Arabic dialects, is usually written without diacritics. The absence of these marks constitutes a real problem for the automatic processing of such data by NLP tools. Indeed, writing Arabic without diacritics introduces several types of ambiguity. Firstly, a word without diacritics can have many possible meanings depending on its diacritization. Secondly, an undiacritized surface form of an Arabic word might have as many as 200 readings, depending on the complexity of its morphology. In fact, the agglutination property of Arabic can produce ambiguity that is only resolvable using diacritics. Thirdly, without diacritics a word can have many possible POS tags instead of one; this is the case for words that share the same spelling and POS tag but differ in lexical sense, and for words that share the same spelling but differ in both POS tag and lexical sense. Finally, there is ambiguity at the grammatical level (syntactic ambiguity). In this paper, we propose the first work that investigates the automatic diacritization of Tunisian Dialect texts. We first describe our annotation guidelines and procedure. Then, we propose two major models: a statistical machine translation (SMT) model and a discriminative model that treats diacritization as a sequence classification task based on CRFs (Conditional Random Fields). In the second approach, we integrate POS features to influence the generation of diacritics. Diacritic restoration was performed at both the word and character levels. The results show high automatic diacritization scores with the CRF system (WER of 21.44% for CRF versus 34.6% for SMT).
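At the character level, the CRF formulation can be sketched as labeling each consonant with the diacritic that follows it, with the word's POS tag injected as a feature so that it can influence diacritic generation. The romanized toy word, features, and labels are illustrative assumptions, not the paper's Tunisian-dialect setup.

```python
# A hedged sketch of character-level diacritic restoration with POS features.
import sklearn_crfsuite

def feats(word, pos, i):
    return {"char": word[i], "pos": pos, "position": i,
            "is_last": i == len(word) - 1}

# Undiacritized "ktb" with POS "VERB" -> diacritics for "kataba" (he wrote).
X = [[feats("ktb", "VERB", i) for i in range(3)]]
y = [["a", "a", "a"]]                   # one diacritic label per character

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))                   # [['a', 'a', 'a']]
```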
Statistical machine translation (SMT) models require large bilingual corpora to produce high-quality results. Nevertheless, such large bilingual corpora are unavailable for almost all language pairs. In this work, we enhance SMT for low-resource languages using semantic similarity. Specifically, we focus on two strategies: sentence alignment and pivot translation. For sentence alignment, we use the representative method based on sentence length and word alignment as a baseline. We utilize word2vec to extract word similarity from monolingual data to improve the word alignment phase of the baseline method, and the proposed sentence alignment algorithm is used to build bilingual corpora from Wikipedia. In pivot translation, the representative method, called triangulation, connects source phrases to target phrases via common pivot phrases in source-pivot and pivot-target phrase tables. However, it may lose information when pivot phrases carry the same meaning but are not matched to each other. Therefore, we use similarity between pivot phrases to improve the triangulation method. Finally, we introduce a framework that combines the two proposed algorithms to improve SMT for low-resource languages. We conduct experiments on low-resource language pairs, including Japanese-Vietnamese and Southeast Asian languages (Indonesian, Malay, Filipino, and Vietnamese). Experimental results show that our proposed sentence alignment and pivot translation methods based on semantic similarity improve the baseline methods, and the proposed framework significantly improves baseline SMT models trained on small bilingual corpora.
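The similarity-augmented triangulation can be sketched as a join over the two phrase tables that accepts not only exact pivot matches but also pivot pairs whose similarity exceeds a threshold. The toy tables, the threshold, and the `sim` function (which would come from word2vec-style phrase similarity) are illustrative assumptions.

```python
# A hedged sketch of similarity-augmented pivot triangulation.
def triangulate(src_pivot, pivot_tgt, sim, threshold=0.8):
    table = {}
    for (src, p1), score1 in src_pivot.items():
        for (p2, tgt), score2 in pivot_tgt.items():
            s = 1.0 if p1 == p2 else sim(p1, p2)   # exact or similar pivot match
            if s >= threshold:
                table[(src, tgt)] = max(table.get((src, tgt), 0.0),
                                        score1 * score2 * s)
    return table

# Toy tables: "hello" and "hi" are unmatched pivots with the same meaning.
src_pivot = {("xin chào", "hello"): 0.9}
pivot_tgt = {("hi", "こんにちは"): 0.8}
print(triangulate(src_pivot, pivot_tgt, sim=lambda a, b: 0.85))
```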