ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)

Latest Articles

Optimizing Automatic Evaluation of Machine Translation with the ListMLE Approach

Automatic evaluation of machine translation is critical for the evaluation and development of machine translation systems. In this study, we propose a...

Response Selection and Automatic Message-Response Expansion in Retrieval-Based QA Systems using Semantic Dependency Pair Model

This article presents an approach to response selection and message-response (MR) database expansion...

Input Method for Human Translators: A Novel Approach to Integrate Machine Translation Effectively and Imperceptibly

Computer-aided translation (CAT) systems are the most popular tool for helping human translators efficiently perform language translation. To further improve the translation efficiency, there is an increasing interest in applying machine translation (MT) technology to upgrade CAT. To thoroughly integrate MT into CAT systems, in this article, we...

Arabic Authorship Attribution: An Extensive Study on Twitter Posts

Law enforcement faces problems in tracing the true identity of offenders in cybercrime investigations. Most offenders mask their true identity, impersonate people of high authority, or use identity deception and obfuscation tactics to avoid detection and traceability. To address the problem of anonymity, authorship analysis is used to identify...

Word Segmentation for Burmese Based on Dual-Layer CRFs

Burmese is an isolating language, in which the syllable is the smallest unit. Syllable segmentation methods based on matching lead to performance...

NEWS

Science Citation Index Listing

TALLIP will be listed in the Science Citation Index Expanded starting with the first 2015 issue, 14(1). TALLIP will be included in the 2017 Journal Citation Report, and the first Impact Factor will be published in mid-2018.

New Name, Expanded Scope

This page provides information about the journal Transactions on Asian and Low-Resource Language Information Processing (TALLIP), a publication of the Association for Computing Machinery (ACM).

The journal was formerly known as the Transactions on Asian Language Information Processing (TALIP): see the editorial charter for information on the expanded scope of the journal.  

Diacritic-Based Matching of Arabic Words

Words in Arabic consist of letters and short vowel symbols called diacritics, inscribed atop regular letters. Changing the diacritics may change the syntax and semantics of a word, turning it into another. This causes difficulties when comparing words based solely on string matching. Typically, Arabic NLP applications resort to morphological analysis to battle the ambiguity originating from this and other challenges. In this paper, we introduce three alternative algorithms to compare two words with possibly different diacritics. We propose the Subsume knowledge-based algorithm, the Imply rule-based algorithm, and the Alike machine-learning-based algorithm. We evaluated the soundness, completeness, and accuracy of the algorithms against a large dataset of 86,886 word pairs. Our evaluation shows the accuracy of Subsume (100%), Imply (99.32%), and Alike (99.53%). Although accurate, Subsume was able to judge only 75% of the data. Both Subsume and Imply are sound, while Alike is not. We demonstrate the utility of the algorithms in a real-life use case: lemma disambiguation and the linking of hundreds of Arabic dictionaries.
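
The flavor of diacritic-aware comparison can be conveyed in a few lines of Python. Below is a minimal sketch of the idea behind Subsume, under the simplifying assumption that one word subsumes another when their base letters match and every diacritic the second word specifies agrees with the first; the actual knowledge-based algorithm is considerably richer.

```python
# A toy "subsume" check: a more fully diacritized word is compatible with a
# less diacritized one if base letters match and no specified diacritic
# conflicts. The positional rule is a simplification for illustration.
DIACRITICS = set("\u064B\u064C\u064D\u064E\u064F\u0650\u0651\u0652")  # harakat

def split_diacritics(word):
    """Return a list of (base letter, set of diacritics) pairs."""
    units = []
    for ch in word:
        if ch in DIACRITICS and units:
            units[-1][1].add(ch)
        else:
            units.append((ch, set()))
    return units

def subsumes(a, b):
    """True if `a` is at least as specified as `b` and never contradicts it."""
    ua, ub = split_diacritics(a), split_diacritics(b)
    if [l for l, _ in ua] != [l for l, _ in ub]:
        return False  # different base letters: different words
    return all(db <= da for (_, da), (_, db) in zip(ua, ub))

full = "\u0643\u064E\u062A\u064E\u0628\u064E"  # kataba, fully diacritized
bare = "\u0643\u062A\u064E\u0628"              # same word, one diacritic
print(subsumes(full, bare))                    # True
```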

Sentiment Analysis of Iraqi Arabic Dialect on Facebook Based on Distributed Representations of Documents

Nowadays, social media is used by many people to express their opinions about a variety of topics. Opinion Mining or Sentiment Analysis techniques extract opinions from user-generated content. Over the years, a multitude of Sentiment Analysis studies have been conducted on the English language, with a deficiency of research in all other languages. Unfortunately, Arabic is one of the languages that seems to lack substantial research, despite the rapid growth of its use on social media outlets. Furthermore, specific Arabic dialects should be studied, not just Modern Standard Arabic. In this paper, we experiment with sentiment analysis of the Iraqi Arabic dialect using word embeddings. First, we built a large corpus from previous works to learn word representations. Second, we generated a word embedding model by training on the corpus using Doc2Vec representations based on the Distributed Memory Model of Paragraph Vectors (DM-PV) architecture. Lastly, the resulting feature representations were used to train four binary classifiers (Logistic Regression, Decision Tree, Support Vector Machine, and Naive Bayes) to detect sentiment. We also experimented with different parameter values (window size, dimension, and number of negative samples). In light of the experiments, it can be concluded that our approach achieves better performance for Logistic Regression and Support Vector Machine than for the other classifiers.
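
As a rough illustration of this pipeline, the sketch below trains a DM-PV Doc2Vec model with gensim and feeds inferred document vectors to one of the four classifiers. The toy posts, labels, and hyperparameter values are placeholders, not the paper's corpus or settings.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

posts = [(["خدمة", "ممتازة"], 1), (["اسوأ", "تجربة"], 0)]  # toy labeled posts
tagged = [TaggedDocument(words, [i]) for i, (words, _) in enumerate(posts)]

# dm=1 selects the Distributed Memory variant of Paragraph Vectors (DM-PV).
model = Doc2Vec(vector_size=100, window=5, negative=5, dm=1, min_count=1, epochs=40)
model.build_vocab(tagged)
model.train(tagged, total_examples=model.corpus_count, epochs=model.epochs)

X = [model.infer_vector(words) for words, _ in posts]
y = [label for _, label in posts]
clf = LogisticRegression().fit(X, y)  # one of the four binary classifiers
print(clf.predict([model.infer_vector(["خدمة", "ممتازة"])]))  # predicted polarity
```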

Sub-stroke-wise Relative Feature for Online Indic Handwriting Recognition

The main problem in Bangla and Devanagari handwriting recognition is the shape similarity of characters. There are only a few works on author-independent cursive online Indic text recognition, and the shape similarity problem needs more attention from researchers. To handle the shape similarity problem of cursive characters in the Bangla and Devanagari scripts, in this paper we propose a new category of features called the sub-stroke-wise relative feature (SRF), based on relative information about the constituent parts of handwritten strokes. Relative information among some of the parts within a character can be a distinctive feature, as it scales up small dissimilarities and enhances discrimination among similar-looking shapes. Contextual anticipatory phenomena are also automatically modeled by this type of feature, as it takes into account the influence of previous and forthcoming strokes. We have tested popular state-of-the-art feature sets as well as the proposed SRF using various lexicons (up to 20,000 words) and observed that SRF significantly outperforms the state-of-the-art feature sets for online Bangla and Devanagari cursive word recognition.
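
The gist of a relative feature can be sketched as follows: cut a stroke into sub-strokes and describe each one relative to the others. The direction-difference and length-ratio features below are illustrative choices, not the paper's exact SRF definition.

```python
import numpy as np

def relative_features(points, k=4):
    """Pairwise relative direction and size of k contiguous sub-strokes."""
    segs = np.array_split(np.asarray(points, dtype=float), k)
    vecs = [s[-1] - s[0] for s in segs]            # chord of each sub-stroke
    angles = [np.arctan2(v[1], v[0]) for v in vecs]
    lengths = [np.linalg.norm(v) + 1e-8 for v in vecs]
    feats = []
    for i in range(k):
        for j in range(i + 1, k):
            feats.append(angles[j] - angles[i])    # relative direction
            feats.append(lengths[j] / lengths[i])  # relative size
    return np.array(feats)

stroke = [(0, 0), (1, 1), (2, 1), (3, 0), (4, -1), (5, -1), (6, 0), (7, 1)]
print(relative_features(stroke))  # small dissimilarities get scaled up
```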

A Rule-based Kurdish Text Transliteration System

In this article, we present a rule-based approach for transliterating between the two most widely used orthographies of Sorani Kurdish. Our work consists of detecting each character in a word, removing possible ambiguities, and mapping it into the target orthography. We describe various challenges in Kurdish text mining and propose novel ideas concerning the transliteration task for Sorani Kurdish. Our transliteration system, named Wergor, achieves 82.79% overall precision and more than 99% precision in detecting double-usage characters. We also present a manually transliterated corpus for Kurdish.
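
A skeleton of such a rule-based transliterator is easy to convey: walk the word character by character, disambiguate double-usage characters from context, and map into the target orthography. The mapping table and the context rule below are illustrative placeholders, not Wergor's actual rules or the full Sorani character inventory.

```python
MAP = {"ب": "b", "ر": "r", "د": "d", "ا": "a", "ک": "k"}  # toy fragment
VOWELS = set("ا")  # placeholder vowel set

def transliterate(word):
    out = []
    for i, ch in enumerate(word):
        if ch == "و":  # double-usage character: consonant /w/ or vowel /u/
            at_start = i == 0
            after_vowel = i > 0 and word[i - 1] in VOWELS
            out.append("w" if at_start or after_vowel else "u")  # toy rule
        else:
            out.append(MAP.get(ch, ch))  # pass unknown characters through
    return "".join(out)

print(transliterate("برادو"))  # bradu
print(transliterate("وار"))    # war
```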

Online Handwritten Gurmukhi Words Recognition: An Inclusive Study

Identification of offline and online handwritten words is a challenging and complex task. In comparison to Latin and Oriental scripts, research on handwriting recognition at the word level in Indic scripts is in its initial phases. Global and analytical approaches are the two main methods of handwriting recognition. The present work introduces a novel analytical approach for online handwritten Gurmukhi word recognition based on a minimal set of words, and recognizes an input Gurmukhi word as a sequence of characters. We employed a sequential, step-by-step approach to recognize online handwritten Gurmukhi words. Considering the massive variability in online Gurmukhi handwriting, the present work employs a completely linked non-homogeneous hidden Markov model. In the present study, we considered the dependent, major dependent, and super dependent nature of strokes that form Gurmukhi characters in words. On test sets of online handwritten Gurmukhi datasets, the word-level accuracy rates are 85.98%, 84.80%, 82.40%, and 82.20% in four different modes. Besides online Gurmukhi word recognition, the present work also provides a study of Gurmukhi handwriting analysis for varying writing styles, and proposes novel techniques for zone detection and rearrangement of strokes. Our proposed algorithms have been successfully employed for online handwritten Gurmukhi word recognition in dependent and independent modes of handwriting.
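
One of the mentioned building blocks, zone detection, can be sketched simply: assign a stroke to the upper, middle, or lower zone from its vertical position relative to estimated headline and baseline positions. The fixed 25%/75% band boundaries are an illustrative heuristic, not the paper's technique.

```python
def detect_zone(stroke_points, top, bottom):
    """Classify a stroke into the upper/middle/lower zone of the word's box.

    Image y-coordinates grow downward, so a smaller y means higher on the page.
    """
    ys = [y for _, y in stroke_points]
    center = sum(ys) / len(ys)
    headline = top + 0.25 * (bottom - top)  # assumed headline position
    baseline = top + 0.75 * (bottom - top)  # assumed baseline position
    if center < headline:
        return "upper"
    if center > baseline:
        return "lower"
    return "middle"

stroke = [(10, 12), (14, 18), (18, 15)]        # e.g., a matra near the top
print(detect_zone(stroke, top=0, bottom=100))  # upper
```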

Wikipedia-based Relatedness Measurements for Multilingual Short Text Clustering

Throughout the world, people can post information about their local area in their own languages using social networking services. Multilingual short text clustering is an important task for organizing such information, and it can be applied to various applications such as event detection and summarization. However, measuring the relatedness between short texts written in various languages is a challenging problem. In addition to handling multiple languages, the semantic gaps among all languages must be considered. In this paper, we propose two Wikipedia-based semantic relatedness measurement methods for multilingual short text clustering. The proposed methods solve the semantic gap problem by incorporating the inter-language links of Wikipedia into Extended Naive Bayes (ENB), a probabilistic method that can be applied to measure semantic relatedness among monolingual short texts. The proposed methods represent a multilingual short text as a vector of articles (entities) from the English version of Wikipedia. By transferring texts to a unified vector space, the relatedness between texts in different languages with similar meanings can be increased. We also propose an approach that can improve clustering performance and reduce processing time by eliminating language-specific entities in the unified vector space. Experimental results on multilingual Twitter message clustering revealed that the proposed methods outperformed cross-lingual explicit semantic analysis, a previously proposed method for measuring relatedness between texts in different languages. Moreover, the proposed methods were comparable to ENB applied to texts translated into English using a proprietary translation service. The proposed methods enabled relatedness measurement for multilingual short text clustering without requiring machine translation.
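
The core mapping step can be illustrated compactly: every entity found in a text is translated into its English Wikipedia counterpart through inter-language links, and texts are then compared in that unified space. The link table, annotations, and weights below are toy stand-ins for what the paper derives from Wikipedia and the ENB model.

```python
import math
from collections import Counter

INTERLANG = {  # (language, article title) -> English article title
    ("ja", "地震"): "Earthquake",
    ("de", "Erdbeben"): "Earthquake",
    ("ja", "東京"): "Tokyo",
}

def entity_vector(annotations, lang):
    """annotations: (wikipedia title, weight) pairs detected in a short text."""
    vec = Counter()
    for title, w in annotations:
        en = INTERLANG.get((lang, title), title if lang == "en" else None)
        if en is not None:  # entities without an English counterpart are dropped
            vec[en] += w
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

ja = entity_vector([("地震", 0.9), ("東京", 0.4)], "ja")
de = entity_vector([("Erdbeben", 0.8)], "de")
print(cosine(ja, de))  # high: both texts land on the Earthquake entity
```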

Low-Resource Machine Transliteration Using Recurrent Neural Networks

Grapheme-to-phoneme models are key components in automatic speech recognition and text-to-speech systems. For low-resource language pairs that do not have available and well-developed pronunciation lexicons, grapheme-to-phoneme models are particularly useful. These models are based on initial alignments between grapheme source and phoneme target sequences. Inspired by sequence-to-sequence recurrent neural network-based translation methods, the current research presents an approach that applies an alignment representation for input sequences and pre-trained source and target embeddings to overcome the transliteration problem for a low-resource language pair. Evaluation and experiments involving French and Vietnamese showed that, with only a small bilingual pronunciation dictionary available for training the transliteration models, promising results were obtained, with a large increase in BLEU scores and a reduction in translation error rate (TER) and phoneme error rate (PER). Moreover, we compared our proposed neural network-based transliteration approach with a statistical one.

Pause-based phrase extraction and effective OOV handling for low-resource machine translation systems

Machine translation is a core problem in natural language processing research across the globe. However, building a translation system involving low-resource languages remains a challenge for statistical machine translation (SMT). This work proposes and studies the effect of a phrase-induced hybrid machine translation system for English-to-Tamil translation, under a low-resource setting, using a limited domain-specific parallel text corpus. Unlike conventional hybrid MT systems, the free word order of the target language, Tamil, is exploited to form a re-ordered target language model and to extend the parallel text corpus for training the SMT. In the current work, a novel rule-based phrase extraction method, implemented using parts-of-speech (POS) and place-of-pause (POP) in both languages, is proposed and used to pre-process the training corpus for developing the back-off phrase-induced SMT (PiSMT). Further, out-of-vocabulary (OOV) words are handled using speech-based transliteration and two-level thesaurus intersection techniques based on the parts-of-speech tag of the OOV word. In order to ensure that input with OOV words does not skip phrase-level translation in the hierarchical model, a phrase-level example-based machine translation (PL-EBMT) approach is adopted to find the closest matching phrase and perform translation, followed by OOV replacement. The proposed system achieves a bilingual evaluation understudy (BLEU) score of 80.21 and a translation edit rate (TER) of 20.18. The performance of the system is compared, in terms of adequacy and fluency, with existing translation systems for this language pair, and it is observed that the proposed system outperforms its counterparts.
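
The pause-and-POS idea behind the phrase extraction step can be sketched as follows: a pause marker in the (speech-derived) text closes a phrase, and so does a change of coarse POS group. The tag groups, the pause token, and the example are illustrative, not the paper's rule set.

```python
PAUSE = "<p>"
GROUP = {"DT": "NP", "JJ": "NP", "NN": "NP", "VB": "VP", "IN": "PP"}

def extract_phrases(tagged):
    phrases, current, current_group = [], [], None
    for word, tag in tagged:
        group = GROUP.get(tag)
        if word == PAUSE or (current and group != current_group):
            if current:
                phrases.append(current)  # close the running phrase
            current, current_group = [], None
            if word == PAUSE:
                continue
        current.append(word)
        current_group = group
    if current:
        phrases.append(current)
    return phrases

sent = [("the", "DT"), ("old", "JJ"), ("temple", "NN"), ("<p>", ""),
        ("stands", "VB"), ("near", "IN"), ("the", "DT"), ("river", "NN")]
print(extract_phrases(sent))
# [['the', 'old', 'temple'], ['stands'], ['near'], ['the', 'river']]
```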

"UTTAM": An Efficient Spelling Correction System for Hindi Language Based on Supervised Learning

Improving NER Tagging Performance in Low-Resource Languages via Multilingual Learning

Existing supervised solutions for Named Entity Recognition (NER) typically rely on large annotated corpora. Collecting a large annotated corpus is time-consuming and requires considerable human effort. Collecting a small NER-annotated corpus, however, is feasible for any language, but performance may then degrade due to data sparsity. We address data sparsity by borrowing features from the data of a closely related language. We use hierarchical neural networks to train a supervised NER system, and the feature borrowing happens via sharing the layers of the network across languages. The neural network is trained on the combined dataset of the involved languages, an approach also termed multilingual learning. Unlike existing systems, we share all layers of the network across languages. In our experiments, sharing all layers of the network has been empirically observed to yield better NER tagging performance for Indian languages. Through multilingual learning, we show that low-resource-language NER performance increases mainly due to (a) increased named entity vocabulary, (b) cross-lingual sub-word features, and (c) multilingual learning playing the role of regularization.
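
A compact sketch of the shared-layer setup: one BiLSTM tagger whose embedding, recurrent, and output layers are all shared, trained on the merged data of two related languages. The toy vocabulary, tagset, and batches (say, Hindi and Marathi sentences indexed in one shared vocabulary) are placeholders; the paper's hierarchical network is more elaborate.

```python
import torch
import torch.nn as nn

class SharedNER(nn.Module):
    def __init__(self, vocab_size, n_tags, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)  # shared across languages
        self.lstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * dim, n_tags)     # shared as well

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)

hindi_batch = torch.tensor([[1, 2, 3]])    # word indices in a shared vocabulary
marathi_batch = torch.tensor([[4, 2, 5]])
tags = torch.tensor([[0, 1, 0]])           # toy gold NER tags for both batches

model = SharedNER(vocab_size=100, n_tags=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for batch in (hindi_batch, marathi_batch):  # multilingual learning: one model
    logits = model(batch)
    loss = loss_fn(logits.view(-1, 3), tags.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```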

Incorporating Multi-level User Preference into Document-level Sentiment Classification

Document-level sentiment classification aims to predict a user's sentiment polarity in a document about a product. Most existing methods focus only on review content and ignore the users who post the reviews. In fact, when reviewing a product, different users have different word-usage habits for expressing opinions (i.e., word-level user preference), care about different attributes of the product (i.e., aspect-level user preference), and have different characteristics when scoring the review (i.e., polarity-level user preference). These preferences have a great influence on interpreting the sentiment of a text. To address this issue, we propose a model called the Hierarchical User Attention Network (HUAN), which incorporates multi-level user preference into a hierarchical neural network to perform document-level sentiment classification. Specifically, HUAN encodes different kinds of information (word, sentence, aspect, and document) in a hierarchical structure and introduces user embeddings and a user attention mechanism to model these preferences. Empirical results on two real-world datasets show that HUAN achieves state-of-the-art performance. Furthermore, HUAN can also mine important product attributes for different users.
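
One ingredient of such a model, attention conditioned on a user embedding, can be sketched directly: the same hidden states receive different weights for different users. The additive scoring form and all dimensions are assumptions for illustration; HUAN stacks such mechanisms over words, sentences, and aspects.

```python
import torch
import torch.nn as nn

class UserAttention(nn.Module):
    def __init__(self, hidden, user_dim):
        super().__init__()
        self.w_h = nn.Linear(hidden, hidden, bias=False)
        self.w_u = nn.Linear(user_dim, hidden, bias=False)
        self.v = nn.Linear(hidden, 1, bias=False)

    def forward(self, states, user):
        # states: (batch, seq, hidden); user: (batch, user_dim)
        scores = self.v(torch.tanh(self.w_h(states) + self.w_u(user).unsqueeze(1)))
        alpha = torch.softmax(scores, dim=1)  # per-user attention weights
        return (alpha * states).sum(dim=1)    # user-specific summary vector

att = UserAttention(hidden=8, user_dim=4)
summary = att(torch.randn(2, 5, 8), torch.randn(2, 4))
print(summary.shape)  # torch.Size([2, 8])
```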

On the Usage of a Classical Arabic Corpus as a Language Resource: Related Research and Key Challenges

This paper presents a literature review of computer science related work applied to hadith, a kind of Arabic narration that appeared in the 7th century. We study and compare existing works in several fields of Natural Language Processing (NLP), Information Retrieval (IR), and Knowledge Extraction (KE). We thereby elicit the main drawbacks of existing works and identify some research issues that may be taken up by the research community. We also study the characteristics of this type of document by enumerating the advantages and limits of using hadith as a language resource. Moreover, our study shows that existing works used different collections of hadiths, making it hard to compare their results objectively. Besides, many preprocessing steps recur across these applications, wasting a lot of time. Consequently, the key issues for building generic language resources from hadiths are discussed, taking into account the relevance of related works and the wide community of researchers interested in them. The ultimate goal is to structure hadith books for multiple usages, thus building common collections that may be exploited in future applications.

NOVA: A Feasible and Flexible Annotation System for Joint Tokenization and Part-of-Speech Tagging

A feasible and flexible annotation system is designed for joint tokenization and part-of-speech (POS) tagging to annotate languages without a natural definition of words. This design was motivated by the fact that word separators are not used in many highly analytic East and Southeast Asian languages. Although several of these languages are well studied, e.g., Chinese and Japanese, many are understudied and low-resource, e.g., Burmese (Myanmar) and Khmer. In the first part of the paper, the proposed annotation system, named nova, is introduced. nova contains only four basic tags (n, v, a, and o), but these tags can be further modified and combined to adapt to complex linguistic phenomena in tokenization and POS tagging. In the second part of the paper, the application of nova is discussed, with practical examples in Burmese and Khmer, where the feasibility and flexibility of nova are demonstrated. The relation between nova and two universal POS tagsets is discussed in the final part of the paper.

Improving Word Embedding Coverage in Less-resource Language through Multi-linguality and Cross-linguality: A Case Study with Aspect based Sentiment Analysis

Efficient word representations play an important role in solving various problems related to Natural Language Processing (NLP), data mining, text mining, etc. The issue of data sparsity poses a great challenge in creating an efficient word representation model for solving the underlying problem. The problem is further intensified in resource-poor languages due to the absence of a sufficient amount of corpus. In this work, we propose to minimize the effect of data sparsity by leveraging bilingual word embeddings learned through a parallel corpus. We train and evaluate a deep Long Short-Term Memory (LSTM) based architecture and show the effectiveness of the proposed approach for two aspect-level sentiment analysis tasks, i.e., aspect term extraction and sentiment classification. The neural network architecture is further assisted by hand-crafted features for prediction. We apply the proposed model in two experimental setups, viz. multilingual and cross-lingual. Experimental results show the effectiveness of the proposed approach against state-of-the-art methods.
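
The cross-lingual setup can be condensed to its essence: words of both languages live in one shared bilingual embedding space, so a classifier trained on the resource-rich language transfers to the resource-poor one. The tiny hand-made vectors below stand in for embeddings learned from a parallel corpus, averaging stands in for the paper's LSTM, and the English/Hindi pairing is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

BILINGUAL = {  # a shared embedding space for two languages (toy values)
    "good": [0.9, 0.1], "अच्छा": [0.88, 0.12],
    "bad": [0.1, 0.9], "बुरा": [0.12, 0.88],
    "food": [0.5, 0.5], "खाना": [0.5, 0.5],
}

def embed(tokens):
    vecs = [BILINGUAL[t] for t in tokens if t in BILINGUAL]
    return np.mean(vecs, axis=0)

X_en = [embed("good food".split()), embed("bad food".split())]  # train: English
clf = LogisticRegression().fit(X_en, [1, 0])

X_hi = [embed("अच्छा खाना".split())]                             # test: Hindi
print(clf.predict(X_hi))  # [1]: positive, although no Hindi data was seen
```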

Tempo-HindiWordNet: A Lexical Knowledge-base for Temporal Information Processing

Temporality has contributed significantly to various Natural Language Processing and Information Retrieval applications. In this paper, we first create a lexical knowledge-base in Hindi by identifying the temporal orientation of word senses based on their definitions, and then use this resource to detect the underlying temporal orientation of sentences. In order to create the resource, we propose a semi-supervised learning framework in which each synset of the Hindi WordNet is classified into one of five categories, namely past, present, future, neutral, and atemporal. The algorithm initiates learning with a set of seed synsets and then iterates following different expansion strategies, viz. probabilistic expansion based on classifier confidence and semantic distance based measures. We manifest the usefulness of the resource we build on an external task, viz. sentence-level temporal classification. The underlying idea is that a temporal knowledge-base can help in classifying sentences according to their inherent temporal properties. Experiments on two different domains, viz. general and Twitter, show very interesting results.
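
The expansion loop can be sketched in a self-training style: start from seed synsets, train a classifier on gloss text, and absorb the unlabeled synset it is most confident about in each round. The toy glosses are invented, and where this sketch greedily absorbs one item per round, the paper instead thresholds on classifier confidence and additionally expands via semantic-distance measures over WordNet relations.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seeds = [("era gone by, an earlier time", "past"),
         ("now, at this very moment", "present"),
         ("time yet to come", "future"),
         ("stone, a hard mineral object", "atemporal")]
unlabeled = ["tomorrow and the days to come", "ancient kingdoms of an era gone by"]

labeled_texts = [t for t, _ in seeds]
labels = [l for _, l in seeds]

for _ in range(2):  # a few expansion iterations
    vec = TfidfVectorizer().fit(labeled_texts + unlabeled)
    clf = LogisticRegression(max_iter=1000).fit(vec.transform(labeled_texts), labels)
    if not unlabeled:
        break
    proba = clf.predict_proba(vec.transform(unlabeled))
    best = int(proba.max(axis=1).argmax())     # most confident unlabeled gloss
    labels.append(clf.classes_[proba[best].argmax()])
    labeled_texts.append(unlabeled.pop(best))  # absorb it into the seed set

print(list(zip(labeled_texts, labels)))
```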

Co-occurrence Weight Selection in Generation of Word Embeddings for Low Resource Languages

This study aims to increase the performance of word embeddings on analogy and similarity tasks by proposing a new weighting scheme for co-occurrence counting. The idea behind this new family of weights is to overcome the disadvantage suffered by distantly appearing word pairs, which are in fact semantically close, when representing them in the co-occurrence counts. For high-resource languages this disadvantage may not matter, owing to the high frequency of co-occurrence. However, when there are not enough available resources, such pairs suffer from being distant. In order to favour such pairs, a polynomial weighting scheme is proposed that shifts the weights up for distant words, whereas the weighting of nearby words is left nearly unchanged. The parameter optimization for the new weights and the effects of the weighting scheme are analysed for English, Italian, and Turkish. A small portion of the English resources and a quarter of the Italian resources are utilized for demonstration purposes, as if these languages were low-resource. Performance increases are observed in analogy tests when the proposed weighting scheme is applied to relatively small corpora (i.e., mimicking low-resource languages) of both English and Italian. To show that the effectiveness of the proposed scheme is specific to small corpora, it is also shown on a large English corpus that the proposed weighting scheme does not outperform the original weights. Since Turkish is a relatively low-resource language, it is demonstrated that the proposed weighting scheme can increase performance on both analogy and similarity tests when all Turkish Wikipedia pages are utilized as the corpus.
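
The counting side of this is easy to make concrete: co-occurrence counts are accumulated with a distance-dependent weight, and the proposed family replaces the usual decay with one that lifts distant pairs. The baseline 1/d decay is a common choice in such counting; the exact polynomial form and parameters below are assumptions for illustration.

```python
from collections import defaultdict

def harmonic_weight(d, window=10):
    return 1.0 / d  # standard decay: distant pairs contribute little

def polynomial_weight(d, window=10, alpha=0.75):
    # Illustrative shape: near-identical for small d, lifted for large d.
    return 1.0 / d + (d / window) ** alpha / window

def cooccurrence(tokens, weight, window=10):
    counts = defaultdict(float)
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            d = j - i
            counts[(tokens[i], tokens[j])] += weight(d, window)
            counts[(tokens[j], tokens[i])] += weight(d, window)
    return counts

toks = "the cat sat on the mat near the cat".split()
for scheme in (harmonic_weight, polynomial_weight):
    c = cooccurrence(toks, scheme)
    print(scheme.__name__, round(c[("cat", "mat")], 3))
```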

Multitask Pointer Network for Korean Dependency Parsing

Dependency parsing is a fundamental problem in natural language processing. We introduce a novel dependency parsing framework called head-pointing-based dependency parsing. In this framework, we cast the Korean dependency parsing problem as a statistical head pointing and arc labeling problem. To address it, a novel neural network called the Multitask Pointer Network is devised as a neural sequential head pointing and type labeling architecture. Our approach does not require any hand-crafted features or language-specific rules to parse dependencies. Furthermore, it shows state-of-the-art performance in Korean dependency parsing.
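
The head-pointing view can be sketched with a bilinear scorer: every word scores every candidate head and points at the argmax. The encoder, dimensions, and scoring form are illustrative; the paper's Multitask Pointer Network adds sequential decoding and a joint dependency-label classifier.

```python
import torch
import torch.nn as nn

class HeadPointer(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.enc = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.bilinear = nn.Bilinear(2 * dim, 2 * dim, 1)

    def forward(self, x):
        h, _ = self.enc(x)                          # (batch, n, 2*dim)
        n = h.size(1)
        dep = h.unsqueeze(2).expand(-1, n, n, -1)   # dependent states
        head = h.unsqueeze(1).expand(-1, n, n, -1)  # candidate head states
        scores = self.bilinear(dep.contiguous(), head.contiguous()).squeeze(-1)
        return scores.argmax(dim=-1)                # pointed head per word

model = HeadPointer()
words = torch.randn(1, 6, 64)  # 6 encoded tokens (e.g., Korean eojeols)
print(model(words))            # head index per word (untrained, so arbitrary)
```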

Transition-Based Korean Dependency Parsing Using Hybrid Word Representations of Syllables and Morphemes with LSTMs

Recently, neural approaches to transition-based dependency parsing have become among the state-of-the-art methods for performing dependency parsing tasks in many languages. In neural transition-based parsing, a parser state representation is first computed from the configuration of a stack and a buffer, and is then fed into a feed-forward neural network model that predicts the next transition action. Since words are the basic elements of the stack and buffer, the parser state representation is largely affected by how a word representation is defined. Word representation issues become especially severe in morphologically rich languages such as Korean, where the set of possible words is not restricted but nearly unlimited due to the language's agglutinative characteristics. In this paper, we propose a hybrid word representation that combines two compositional word representations, derived from representations of syllables and morphemes, respectively. Our underlying assumption for this hybrid word representation is that, because syllables and morphemes are two common ways of decomposing Korean words, their effects in inducing word representations are expected to be complementary to one another. Experimental results carried out on the Sejong and SPMRL 2014 datasets show that our proposed hybrid word representation leads to state-of-the-art performance.
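
The hybrid representation itself is simple to sketch: compose one vector from a word's syllable sequence and another from its morpheme sequence, then combine the two. Composition by LSTM and combination by concatenation are assumptions here, as are the toy indices standing in for real syllable and morpheme vocabularies.

```python
import torch
import torch.nn as nn

class HybridWordRep(nn.Module):
    def __init__(self, n_syll, n_morph, dim=32):
        super().__init__()
        self.syll_emb = nn.Embedding(n_syll, dim)
        self.morph_emb = nn.Embedding(n_morph, dim)
        self.syll_rnn = nn.LSTM(dim, dim, batch_first=True)
        self.morph_rnn = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, sylls, morphs):
        _, (hs, _) = self.syll_rnn(self.syll_emb(sylls))
        _, (hm, _) = self.morph_rnn(self.morph_emb(morphs))
        return torch.cat([hs[-1], hm[-1]], dim=-1)  # two complementary views

rep = HybridWordRep(n_syll=1000, n_morph=500)
sylls = torch.tensor([[3, 17, 42]])  # e.g., syllables of 먹었다
morphs = torch.tensor([[7, 21, 9]])  # e.g., morphemes 먹 + 었 + 다
print(rep(sylls, morphs).shape)      # torch.Size([1, 64])
```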

Unsupervised Joint PoS Tagging and Stemming for Agglutinative Languages

The number of possible word forms is theoretically infinite in agglutinative languages. This brings the out-of-vocabulary (OOV) issue to part-of-speech (PoS) tagging in agglutinative languages. Since inflectional morphology does not change the PoS tag of a word, we propose to learn stems along with PoS tags simultaneously. We thereby aim to overcome the sparsity problem by reducing word forms to their stems. We adopt a Bayesian model that is fully unsupervised. We build a hidden Markov model for PoS tagging in which stems are emitted through hidden states. Several versions of the model are introduced in order to observe the effects of different dependencies throughout the corpus, such as the dependency between stems and PoS tags or between PoS tags and affixes. Additionally, we use neural word embeddings to estimate the semantic similarity between the word form and the stem. We use this semantic similarity as prior information to discover the actual stem of a word, since inflection does not change the meaning of a word. We compare our models with other unsupervised stemming and PoS tagging models on Turkish, Hungarian, Finnish, Basque, and English. The results show that a joint model for PoS tagging and stemming improves upon an independent PoS tagger and stemmer in agglutinative languages.
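
The key factorization can be shown with a toy: a word is emitted from a PoS state through its stem and affix, so the best analysis under a tag maximizes P(stem | tag) * P(affix | tag) over candidate split points. The probability tables below are hand-made placeholders; the paper estimates such quantities with a fully unsupervised Bayesian model.

```python
# Toy tables; a real model would learn these from an unannotated corpus.
p_stem = {("gid", "VERB"): 0.02, ("gidiyor", "VERB"): 0.001}
p_affix = {("iyorum", "VERB"): 0.05, ("um", "VERB"): 0.04}

def best_split(word, tag, min_stem=2, floor=1e-9):
    candidates = []
    for i in range(min_stem, len(word) + 1):
        stem, affix = word[:i], word[i:]
        p = p_stem.get((stem, tag), floor) * p_affix.get((affix, tag), floor)
        candidates.append((p, stem, affix))
    return max(candidates)  # highest joint emission probability

p, stem, affix = best_split("gidiyorum", "VERB")  # Turkish: "I am going"
print(stem, "+", affix)                           # gid + iyorum
```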

Tempo-HindiWordNet: A Lexical Knowledge-base for Temporal Information Processing

In this paper, we propose an efficient sentence-level temporal classifier for tagging the sentences of Hindi documents with time senses. In order to achieve this goal, we need to determine the temporal sense of each word in a sentence. We propose a semi-supervised learning framework in which each synset of the Hindi WordNet is classified into one of five temporal dimensions, namely past, present, future, neutral, and atemporal. The algorithm initiates learning with a set of seed entities and then iterates following different expansion strategies, viz. probabilistic expansion based on classifier confidence and semantic distance based measures. We use different representation methods, varying from simple word uni-grams of Hindi WordNet glosses to word embeddings created from the glosses of synsets and other Hindi WordNet relations. The resource thus created is used for tagging sentences with past, present, and future temporal senses. We develop two approaches, based on machine learning and on rules. Evaluation on two different domains, viz. newswire and tweets, shows encouraging performance.
