Authors: Yohanes Sigit Purnomo W.P., Yogan Jaya Kumar, Nur Zareen Zulkarnain, Basit Raza
Language: English
Abstract:
News articles are usually written by journalists based on statements taken from interviews with public figures. The attribution of such statements provides important information, and it can be extracted from news articles to build a knowledge base by framing the task as sequential tagging, as in named-entity recognition. This research applies two deep learning architectures, recurrent neural network (RNN)-based and transformer-based, to build public figure statement attribution and extraction models for the Indonesian language. The experiments are conducted using five deep-learning model architectures with two different corpus sizes to investigate the impact of corpus size on each model's performance. The experiments show that the best RNN-based model is PFSA-ID-BLWCA, which achieves an 81.34% F1 score, and the best transformer-based model is PFSA-ID-TWCA, which obtains an 81.01% F1 score. This research also finds that corpus size influences model performance. Furthermore, the study lays a foundation for attribution extraction in other languages, especially low-resource languages, with some necessary adjustments.
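The sequential tagging framing described above can be illustrated with BIO-style token labels that are decoded back into attribution spans. This is a minimal sketch: the tag set (PERSON, STATEMENT) and the example sentence are assumptions for illustration, not the papers' actual label scheme.

```python
# Hypothetical sketch of attribution extraction as sequence tagging:
# tokens carry BIO tags, and contiguous tagged runs are decoded into
# (label, span_text) pairs. Labels below are illustrative only.

def decode_bio(tokens, tags):
    """Decode a BIO tag sequence into (label, span_text) tuples."""
    spans, current_label, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_label:
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_label == tag[2:]:
            current_tokens.append(token)
        else:  # "O" tag or an I- tag that does not continue the open span
            if current_label:
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = None, []
    if current_label:
        spans.append((current_label, " ".join(current_tokens)))
    return spans

tokens = ["Presiden", "Jokowi", "mengatakan", "ekonomi", "akan", "pulih"]
tags   = ["B-PERSON", "I-PERSON", "O", "B-STATEMENT", "I-STATEMENT", "I-STATEMENT"]
print(decode_bio(tokens, tags))
# [('PERSON', 'Presiden Jokowi'), ('STATEMENT', 'ekonomi akan pulih')]
```

Entity-level F1 scores such as those reported above are then computed over these decoded spans, counting a prediction as correct only when both the label and the span boundaries match.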
Keywords: Statement extraction and attribution, Named-entity recognition, Knowledge-based, Indonesian language, Deep learning
DOI: 10.1016/j.knosys.2024.111558
GITHUB REPOSITORY: https://github.com/sigit-purnomo/pfsa-id
If you extend or use this work, please cite the paper where it was introduced:
@article{PURNOMOWP2024111558,
title = {Extraction and attribution of public figures statements for journalism in Indonesia using deep learning},
journal = {Knowledge-Based Systems},
volume = {289},
pages = {111558},
year = {2024},
issn = {0950-7051},
doi = {10.1016/j.knosys.2024.111558},
url = {https://www.sciencedirect.com/science/article/pii/S095070512400193X},
author = {Yohanes Sigit {Purnomo W.P.} and Yogan Jaya Kumar and Nur Zareen Zulkarnain and Basit Raza},
keywords = {Statement extraction and attribution, Named-entity recognition, Knowledge-based, Indonesian language, Deep learning},
abstract = {News articles are usually written by journalists based on statements taken from interviews with public figures. Attribution from such statements provides important information and it can be extracted from news articles to build a knowledge base by developing a sequential tagging scheme such as entity recognition. This research applies two deep learning architectures: recurrent neural networks-based and transformer-based, to establish public figures statement attribution and extraction models in the Indonesian Language. The experiments are conducted using five deep-learning model architectures with two different corpus sizes to investigate the impact of corpus size on each model's performance. The experiments show that the best model for the RNN-based architecture is PFSA-ID-BLWCA which achieves 81.34 % F1 score, and the best model for the transformer-based is PFSA-ID-TWCA which obtains 81.01 % F1 score. This research also discovers that the size of the corpus influences the model performances. Furthermore, the study lays a foundation to overcome the attribution extraction in another language, especially low-resource languages, with some necessary adjustments.}
}
Authors: Yohanes Sigit Purnomo W.P., Yogan Jaya Kumar, Nur Zareen Zulkarnain
Language: English
Abstract:
Purpose
To date, corpora for the quotation extraction and quotation attribution tasks in Indonesian remain limited in quantity and depth. This study aims to develop an Indonesian corpus of public figure statement attributions and a baseline model for attribution extraction, thereby fostering research in information extraction for the Indonesian language.
Design/methodology/approach
The methodology is divided into corpus development and extraction model development. During corpus development, data were collected and annotated. The development of the extraction model entails feature extraction, the definition of the model architecture, parameter selection and configuration, model training and evaluation, as well as model selection.
Findings
The Indonesian corpus of public figure statement attribution achieved a 90.06% agreement level between the annotator and the experts and can serve as a gold-standard corpus. Furthermore, the baseline model predicted most labels correctly and achieved an 82.026% F-score.
Originality/value
To the best of the authors’ knowledge, the resulting corpus is the first corpus for attribution of public figures’ statements in the Indonesian language, which makes it a significant step for research on attribution extraction in the language. The resulting corpus and the baseline model can be used as a benchmark for further research. Other researchers could follow the methods presented in this paper to develop a new corpus and baseline model for other languages.
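The agreement level reported in the findings can be approximated as simple token-level percentage agreement between the annotator's labels and the expert labels. A minimal sketch, assuming token-aligned label sequences (the labels below are hypothetical):

```python
# Token-level percentage agreement between two annotations of the same
# token sequence. This is a simplified stand-in for corpus validation;
# the label values are illustrative assumptions.

def percent_agreement(labels_a, labels_b):
    """Return the percentage of positions where both label lists agree."""
    if len(labels_a) != len(labels_b):
        raise ValueError("label sequences must be token-aligned")
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return 100.0 * matches / len(labels_a)

annotator = ["B-PERSON", "I-PERSON", "O", "B-STATEMENT", "O"]
expert    = ["B-PERSON", "I-PERSON", "O", "B-STATEMENT", "B-CUE"]
print(f"{percent_agreement(annotator, expert):.2f}%")  # 80.00%
```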
Keywords: Indonesian corpus, Public figures, Statement attribution, News article, Baseline model, Named entity recognition
DOI: 10.1108/GKMC-04-2022-0091
GITHUB REPOSITORY: https://github.com/sigit-purnomo/pfsa-id
If you extend or use this work, please cite the paper where it was introduced:
@article{PURNOMOWP2022,
title = {PFSA-ID: an annotated Indonesian corpus and baseline model of public figures statements attributions},
journal = {Global Knowledge, Memory and Communication},
volume = {ahead-of-print},
pages = {ahead-of-print},
year = {2022},
issn = {2514-9342},
doi = {10.1108/GKMC-04-2022-0091},
url = {https://www.emerald.com/insight/content/doi/10.1108/GKMC-04-2022-0091/full/html},
author = {Yohanes Sigit {Purnomo W.P.} and Yogan Jaya Kumar and Nur Zareen Zulkarnain},
keywords = {Indonesian corpus, Public figures, Statement attribution, News article, Baseline model, Named entity recognition},
abstract = {Purpose By far, the corpus for the quotation extraction and quotation attribution tasks in Indonesian is still limited in quantity and depth. This study aims to develop an Indonesian corpus of public figure statements attributions and a baseline model for attribution extraction, so it will contribute to fostering research in information extraction for the Indonesian language. Design/methodology/approach The methodology is divided into corpus development and extraction model development. During corpus development, data were collected and annotated. The development of the extraction model entails feature extraction, the definition of the model architecture, parameter selection and configuration, model training and evaluation, as well as model selection. Findings The Indonesian corpus of public figure statements attribution achieved 90.06% agreement level between the annotator and experts and could serve as a gold standard corpus. Furthermore, the baseline model predicted most labels and achieved 82.026% F-score. Originality/value To the best of the authors’ knowledge, the resulting corpus is the first corpus for attribution of public figures’ statements in the Indonesian language, which makes it a significant step for research on attribution extraction in the language. The resulting corpus and the baseline model can be used as a benchmark for further research. Other researchers could follow the methods presented in this paper to develop a new corpus and baseline model for other languages.}
}
Authors: Yohanes Sigit Purnomo W.P., Yogan Jaya Kumar, Nur Zareen Zulkarnain
Language: English
Abstract:
Purpose
Extracting information from unstructured data is a challenging task in computational linguistics. Public figures' statements attributed by journalists in a story are one type of information that can be processed into structured data. A knowledge base of such data is therefore beneficial for further use, such as opinion mining, claim detection and fact-checking. This study aims to understand statement extraction tasks and the models that have already been applied, in order to formulate a framework for further study.
Design/methodology/approach
This paper presents a literature review from selected previous research that specifically addresses the topics of quotation extraction and quotation attribution. Research works that discuss corpus development related to quotation extraction and quotation attribution are also considered. The findings of the review will be used as a basis for proposing a framework to direct further research.
Findings
This study has three findings. First, the extraction process consists of two main tasks, namely the extraction of quotations and the attribution of quotations. Second, most extraction algorithms rely on rule-based methods or traditional machine learning. Third, the available corpora are limited in quantity and depth. Based on these findings, a statement extraction framework for Indonesian language corpus and model development is proposed.
Originality/value
The paper serves as a guideline to formulate a framework for statement extraction based on the findings from the literature study. The proposed framework includes corpus development in the Indonesian language and a model for public figure statement extraction. Furthermore, this study can be used as a reference to produce a similar framework for other languages.
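The review's second finding, that most earlier extraction systems are rule-based, can be sketched with a simple pattern: find a direct quotation and attribute it to a following "said <speaker>"-style cue. The cue verbs, the regex, and the example sentence are illustrative assumptions, not a system from the reviewed literature.

```python
import re

# Minimal rule-based quotation extraction sketch: a direct quote followed
# by a reporting-verb cue and a capitalized speaker name. The cue-verb list
# (English + Indonesian) and pattern are hypothetical simplifications.
CUE_VERBS = r"(?:said|kata|ujar|menurut)"

def extract_quotes(text):
    """Return (speaker, quote) pairs for '"...," said Speaker' patterns."""
    pattern = re.compile(r'"([^"]+)"\s*,?\s*' + CUE_VERBS + r"\s+([A-Z][\w ]+)")
    return [(m.group(2).strip(), m.group(1).rstrip(","))
            for m in pattern.finditer(text)]

text = '"The economy will recover next year," said President Jokowi.'
print(extract_quotes(text))
# [('President Jokowi', 'The economy will recover next year')]
```

Patterns like this cover only direct quotations with explicit cues, which is precisely the coverage limitation that motivates the corpus-based, learned approach proposed in the framework.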
Keywords: Journalism, Online News, Corpus Development, Indonesian Language, Quotation Extraction, Quotation Attribution, Statement Extraction
DOI: 10.1108/GKMC-07-2020-0098
If you extend or use this work, please cite the paper where it was introduced:
@article{PURNOMOWP2020,
title = {Understanding quotation extraction and attribution: towards automatic extraction of public figure’s statements for journalism in Indonesia},
journal = {Global Knowledge, Memory and Communication},
volume = {70},
pages = {655-671},
year = {2020},
issn = {2514-9342},
doi = {10.1108/GKMC-07-2020-0098},
url = {https://www.emerald.com/insight/content/doi/10.1108/GKMC-07-2020-0098/full/html},
author = {Yohanes Sigit {Purnomo W.P.} and Yogan Jaya Kumar and Nur Zareen Zulkarnain},
keywords = {Journalism, Online News, Corpus Development, Indonesian Language, Quotation Extraction, Quotation Attribution, Statement Extraction},
abstract = {Purpose. Extracting information from unstructured data becomes a challenging task for computational linguistics. Public figure’s statement attributed by journalists in a story is one type of information that can be processed into structured data. Therefore, having the knowledge base about this data will be very beneficial for further use, such as for opinion mining, claim detection and fact-checking. This study aims to understand statement extraction tasks and the models that have already been applied to formulate a framework for further study. Design/methodology/approach. This paper presents a literature review from selected previous research that specifically addresses the topics of quotation extraction and quotation attribution. Research works that discuss corpus development related to quotation extraction and quotation attribution are also considered. The findings of the review will be used as a basis for proposing a framework to direct further research. Findings. There are three findings in this study. Firstly, the extraction process still consists of two main tasks, namely, the extraction of quotations and the attribution of quotations. Secondly, most extraction algorithms rely on a rule-based algorithm or traditional machine learning. And last, the availability of corpus, which is limited in quantity and depth. Based on these findings, a statement extraction framework for Indonesian language corpus and model development is proposed. Originality/value. The paper serves as a guideline to formulate a framework for statement extraction based on the findings from the literature study. The proposed framework includes a corpus development in the Indonesian language and a model for public figure statement extraction. Furthermore, this study could be used as a reference to produce a similar framework for other languages.}
}
Last updated: September 14, 2020
Source: ArXiv
No. | Year | Title | URL |
---|---|---|---|
1 | 2020 | An Effective Transition-based Model for Discontinuous NER | View |
2 | 2020 | FLAT: Chinese NER Using Flat-Lattice Transformer | View |
3 | 2020 | TriggerNER: Learning with Entity Triggers as Explanations for Named Entity Recognition | View |
4 | 2020 | One Model to Recognize Them All: Marginal Distillation from NER Models with Different Tag Sets | View |
5 | 2020 | NSURL-2019 Task 7: Named Entity Recognition (NER) in Farsi | View |
6 | 2020 | Beheshti-NER: Persian Named Entity Recognition Using BERT | View |
7 | 2020 | Healthcare NER Models Using Language Model Pretraining | View |
8 | 2020 | MT-BioNER: Multi-task Learning for Biomedical Named Entity Recognition using Deep Bidirectional Transformers | View |
9 | 2020 | CLUENER2020: Fine-grained Named Entity Recognition Dataset and Benchmark for Chinese | View |
10 | 2020 | CLUENER2020: Fine-grained Named Entity Recognition for Chinese | View |
11 | 2020 | Computationally Efficient NER Taggers with Combined Embeddings and Constrained Decoding | View |
12 | 2019 | TENER: Adapting Transformer Encoder for Named Entity Recognition | View |
13 | 2019 | HAMNER: Headword Amplified Multi-span Distantly Supervised Method for Domain Specific Named Entity Recognition | View |
14 | 2019 | TENER: Adapting Transformer Encoder for Named Entity Recognition | View |
15 | 2019 | Porous Lattice-based Transformer Encoder for Chinese NER | View |
16 | 2019 | NER Models Using Pre-training and Transfer Learning for Healthcare | View |
17 | 2019 | Feature-Dependent Confusion Matrices for Low-Resource NER Labeling with Noisy Labels | View |
18 | 2019 | Entity Projection via Machine-Translation for Cross-Lingual NER | View |
19 | 2019 | Adversarial Learning with Contextual Embeddings for Zero-resource Cross-lingual Classification and NER | View |
20 | 2019 | Remedying BiLSTM-CNN Deficiency in Modeling Cross-Context for NER | View |
21 | 2019 | Simplify the Usage of Lexicon in Chinese NER | View |
22 | 2019 | FlexNER: A Flexible LSTM-CNN Stack Framework for Named Entity Recognition | View |
23 | 2019 | Cross-Lingual Transfer for Distantly Supervised and Low-resources Indonesian NER | View |
24 | 2019 | Using Similarity Measures to Select Pretraining Data for NER | View |
25 | 2019 | Entity Recognition at First Sight: Improving NER with Eye Movement Information | View |
26 | 2019 | Revised JNLPBA Corpus: A Revised Version of Biomedical NER Corpus for Relation Extraction Task | View |
27 | 2018 | Exploring the importance of context and embeddings in neural NER models for task-oriented dialogue systems | View |
28 | 2018 | microNER: A Micro-Service for German Named Entity Recognition based on BiLSTM-CRF | View |
29 | 2018 | pioNER: Datasets and Baselines for Armenian Named Entity Recognition | View |
30 | 2018 | Cross Script Hindi English NER Corpus from Wikipedia | View |
31 | 2018 | SlugNERDS: A Named Entity Recognition Tool for Open Domain Dialogue Systems | View |
32 | 2018 | Chinese NER Using Lattice LSTM | View |
33 | 2018 | A Feature-Based Model for Nested Named-Entity Recognition at VLSP-2018 NER Evaluation Campaign | View |
34 | 2018 | CliNER 2 | View |
35 | 2018 | Adversarial Learning for Chinese NER from Crowd Annotations | View |
36 | 2017 | Synapse at CAp 2017 NER challenge: Fasttext CRF | View |
37 | 2017 | NeuroNER: an easy-to-use program for named-entity recognition based on neural networks | View |
Last updated: September 04, 2020
Source: ArXiv
No. | Year | Title | URL |
---|---|---|---|
1 | 2020 | Performance Comparison of Crowdworkers and NLP Tools on Named-Entity Recognition and Sentiment Analysis of Political Tweets | View |
2 | 2019 | SemEval-2014 Task 9: Sentiment Analysis in Twitter | View |
3 | 2019 | SemEval-2015 Task 10: Sentiment Analysis in Twitter | View |
4 | 2019 | SemEval-2016 Task 4: Sentiment Analysis in Twitter | View |
5 | 2019 | SemEval-2017 Task 4: Sentiment Analysis in Twitter | View |
6 | 2019 | Sentiment Analysis of German Twitter | View |
7 | 2019 | Sentiment Analysis for Arabic in Social Media Network: A Systematic Mapping Study | View |
8 | 2019 | Sentiment Analysis of Typhoon Related Tweets using Standard and Bidirectional Recurrent Neural Networks | View |
9 | 2019 | Sentiment Analysis at SEPLN (TASS)-2019: Sentiment Analysis at Tweet level using Deep Learning | View |
10 | 2019 | Twitter Sentiment Analysis using Distributed Word and Sentence Representation | View |
11 | 2019 | Sentiment Analysis on IMDB Movie Comments and Twitter Data by Machine Learning and Vector Space Techniques | View |
12 | 2019 | Combination of Domain Knowledge and Deep Learning for Sentiment Analysis of Short and Informal Messages on Social Media | View |
13 | 2018 | Twitter Sentiment Analysis via Bi-sense Emoji Embedding and Attention-based LSTM | View |
14 | 2018 | Twitter Sentiment Analysis System | View |
15 | 2018 | A Sentiment Analysis of Breast Cancer Treatment Experiences and Healthcare Perceptions Across Twitter | View |
16 | 2018 | Sentiment Analysis of Arabic Tweets: Feature Engineering and A Hybrid Approach | View |
17 | 2018 | JU_KSSAIL_CodeMixed-2017: Sentiment Analysis for Indian Code Mixed Social Media Texts | View |
18 | 2018 | Preparation of Improved Turkish DataSet for Sentiment Analysis in Social Media | View |
19 | 2017 | Improved Twitter Sentiment Analysis Using Naive Bayes and Custom Language Model | View |
20 | 2017 | RETUYT in TASS 2017: Sentiment Analysis for Spanish Tweets using SVM and CNN | View |
21 | 2017 | Semantic Sentiment Analysis of Twitter Data | View |
22 | 2017 | Multitask Learning for Fine-Grained Twitter Sentiment Analysis | View |
23 | 2017 | BB_twtr at SemEval-2017 Task 4: Twitter Sentiment Analysis with CNNs and LSTMs | View |
24 | 2017 | NILC-USP at SemEval-2017 Task 4: A Multi-view Ensemble for Twitter Sentiment Analysis | View |
25 | 2016 | Sentiment Analysis for Twitter : Going Beyond Tweet Text | View |
26 | 2016 | Sentiment Analysis of Twitter Data for Predicting Stock Market Movements | View |
27 | 2016 | Sentiment Analysis of Twitter Data: A Survey of Techniques | View |
Last updated: August 22, 2020
Source: ArXiv
No. | Year | Title | URL |
---|---|---|---|
1 | 2020 | Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation | View |
2 | 2020 | Multilingual Machine Translation: Closing the Gap between Shared and Language-specific Encoder-Decoders | View |
3 | 2020 | Using Interlinear Glosses as Pivot in Low-Resource Multilingual Machine Translation | View |
4 | 2020 | Multilingual Denoising Pre-training for Neural Machine Translation | View |
5 | 2020 | A Comprehensive Survey of Multilingual Neural Machine Translation | View |
6 | 2020 | A Brief Survey of Multilingual Neural Machine Translation | View |
7 | 2019 | A Study of Multilingual Neural Machine Translation | View |
8 | 2019 | Adapting Multilingual Neural Machine Translation to Unseen Languages | View |
9 | 2019 | Multilingual Neural Machine Translation with Language Clustering | View |
10 | 2019 | Target Conditioned Sampling: Optimizing Data Selection for Multilingual Neural Machine Translation | View |
11 | 2019 | A Survey of Multilingual Neural Machine Translation | View |
12 | 2019 | Massively Multilingual Neural Machine Translation | View |
13 | 2019 | Multilingual Neural Machine Translation with Knowledge Distillation | View |
14 | 2019 | Multilingual Neural Machine Translation With Soft Decoupled Encoding | View |
15 | 2018 | Transfer Learning in Multilingual Neural Machine Translation with Dynamic Vocabulary | View |
16 | 2018 | Addressing word-order Divergence in Multilingual Neural Machine Translation for extremely Low Resource Languages | View |
17 | 2018 | Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation | View |
18 | 2018 | Paraphrases as Foreign Languages in Multilingual Neural Machine Translation | View |
19 | 2018 | A Comparison of Transformer and Recurrent Neural Networks on Multilingual Neural Machine Translation | View |
20 | 2018 | Multilingual Neural Machine Translation with Task-Specific Attention | View |
21 | 2018 | Semantic Relatedness for All (Languages): A Comparative Analysis of Multilingual Semantic Relatedness Using Machine Translation | View |
22 | 2018 | Bootstrapping Multilingual Intent Models via Machine Translation for Dialog Automation | View |
23 | 2017 | Learning Joint Multilingual Sentence Representations with Neural Machine Translation | View |
24 | 2016 | Toward Multilingual Neural Machine Translation with Universal Encoder and Decoder | View |
25 | 2016 | Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation | View |
Last updated: August 14, 2020
Source: ArXiv
No. | Year | Title | URL |
---|---|---|---|
1 | 2020 | Visual Question Answering Using Semantic Information from Image Descriptions | View |
2 | 2020 | Understanding Knowledge Gaps in Visual Question Answering: Implications for Gap Identification and Testing | View |
3 | 2020 | Generating Rationales in Visual Question Answering | View |
4 | 2020 | PathVQA: 30000+ Questions for Medical Visual Question Answering | View |
5 | 2020 | RUBi: Reducing Unimodal Biases in Visual Question Answering | View |
6 | 2020 | VQA-LOL: Visual Question Answering under the Lens of Logic | View |
7 | 2020 | Component Analysis for Visual Question Answering Architectures | View |
8 | 2020 | Augmenting Visual Question Answering with Semantic Frame Information in a Multitask Learning Approach | View |
9 | 2020 | Robust Explanations for Visual Question Answering | View |
10 | 2020 | Generating Question Relevant Captions to Aid Visual Question Answering | View |
11 | 2019 | Assessing the Robustness of Visual Question Answering | View |
12 | 2019 | Self-Critical Reasoning for Robust Visual Question Answering | View |
13 | 2019 | Learning Sparse Mixture of Experts for Visual Question Answering | View |
14 | 2019 | Inverse Visual Question Answering with Multi-Level Attentions | View |
15 | 2019 | Decoupled Box Proposal and Featurization with Ultrafine-Grained Semantic Labels Improve Image Captioning and Visual Question Answering | View |
16 | 2019 | VideoNavQA: Bridging the Gap between Visual and Embodied Question Answering | View |
17 | 2019 | Fusion of Detected Objects in Text for Visual Question Answering | View |
18 | 2019 | An Empirical Study on Leveraging Scene Graphs for Visual Question Answering | View |
19 | 2019 | A Comparative Evaluation of Visual and Natural Language Question Answering Over Linked Data | View |
20 | 2019 | Quantifying and Alleviating the Language Prior Problem in Visual Question Answering | View |
21 | 2019 | GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering | View |
22 | 2019 | Generating Natural Language Explanations for Visual Question Answering using Scene Graphs and Visual Attention | View |
23 | 2018 | Textually Enriched Neural Module Networks for Visual Question Answering | View |
24 | 2018 | Faithful Multimodal Explanation for Visual Question Answering | View |
25 | 2018 | Question-Guided Hybrid Convolution for Visual Question Answering | View |
26 | 2018 | Interpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining | View |
27 | 2018 | Learning Visual Question Answering by Bootstrapping Hard Attention | View |
28 | 2018 | Question Relevance in Visual Question Answering | View |
29 | 2018 | Learning Visual Knowledge Memory Networks for Visual Question Answering | View |
30 | 2018 | Think Visually: Question Answering through Virtual Imagery | View |
31 | 2018 | R-VQA: Learning Visual Relation Facts with Semantic Attention for Visual Question Answering | View |
32 | 2018 | Reciprocal Attention Fusion for Visual Question Answering | View |
33 | 2018 | Explicit Reasoning over End-to-End Neural Architectures for Visual Question Answering | View |
34 | 2018 | Attention on Attention: Architectures for Visual Question Answering (VQA) | View |
35 | 2018 | Co-attending Free-form Regions and Detections with Multi-modal Multiplicative Feature Embedding for Visual Question Answering | View |
36 | 2018 | Learning to Count Objects in Natural Images for Visual Question Answering | View |
37 | 2018 | Dual Recurrent Attention Units for Visual Question Answering | View |
38 | 2017 | Interpretable Counting for Visual Question Answering | View |
39 | 2017 | Don’t Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering | View |
40 | 2017 | Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge | View |
41 | 2017 | MemexQA: Visual Memex Question Answering | View |
42 | 2017 | Visual Question Answering with Memory-Augmented Networks | View |
43 | 2017 | Learning Convolutional Text Representations for Visual Question Answering | View |
44 | 2017 | Survey of Visual Question Answering: Datasets and Techniques | View |
45 | 2017 | Speech-Based Visual Question Answering | View |
46 | 2017 | The Promise of Premise: Harnessing Question Premises in Visual Question Answering | View |
47 | 2017 | C-VQA: A Compositional Split of the Visual Question Answering (VQA) v1 | View |
48 | 2017 | Being Negative but Constructively: Lessons Learnt from Creating Better Visual Question Answering Datasets | View |
49 | 2017 | An Analysis of Visual Question Answering Algorithms | View |
50 | 2017 | Recurrent and Contextual Models for Visual Question Answering | View |
51 | 2017 | VQABQ: Visual Question Answering by Basic Questions | View |
52 | 2017 | Task-driven Visual Saliency and Attention-based Visual Question Answering | View |
53 | 2016 | VIBIKNet: Visual Bidirectional Kernelized Network for Visual Question Answering | View |
54 | 2016 | Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering | View |
55 | 2016 | Zero-Shot Visual Question Answering | View |
56 | 2016 | Hierarchical Question-Image Co-Attention for Visual Question Answering | View |
57 | 2016 | Proposing Plausible Answers for Open-ended Visual Question Answering | View |
58 | 2016 | Visual Question Answering: Datasets, Algorithms, and Future Challenges | View |
59 | 2016 | The Color of the Cat is Gray: 1 Million Full-Sentences Visual Question Answering (FSVQA) | View |
60 | 2016 | Graph-Structured Representations for Visual Question Answering | View |
61 | 2016 | Measuring Machine Intelligence Through Visual Question Answering | View |
62 | 2016 | Interpreting Visual Question Answering Models | View |
63 | 2016 | Analyzing the Behavior of Visual Question Answering Models | View |
64 | 2016 | Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? | View |
65 | 2016 | Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding | View |
66 | 2016 | Hierarchical Co-Attention for Visual Question Answering | View |
67 | 2016 | Ask Your Neurons: A Deep Learning Approach to Visual Question Answering | View |
68 | 2016 | A Focused Dynamic Attention Model for Visual Question Answering | View |
69 | 2016 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | View |
70 | 2016 | VQA: Visual Question Answering | View |
71 | 2016 | Dynamic Memory Networks for Visual and Textual Question Answering | View |
Last updated: August 07, 2020
Source: ArXiv
No. | Year | Title | URL |
---|---|---|---|
1 | 2020 | Sentiment Analysis Using Simplified Long Short-term Memory Recurrent Neural Networks | View |
2 | 2020 | Would You Like Sashimi Even If It’s Sliced Too Thin? Selective Neural Attention for Aspect Targeted Sentiment Analysis (SNAT) | View |
3 | 2019 | Learning Robust Heterogeneous Signal Features from Parallel Neural Network for Audio Sentiment Analysis | View |
4 | 2019 | Investigating the Effect of Segmentation Methods on Neural Model based Sentiment Analysis on Informal Short Texts in Turkish | View |
5 | 2018 | Self-Attention: A Better Building Block for Sentiment Analysis Neural Network Classifiers | View |
6 | 2018 | Code-Mixed Sentiment Analysis Using Machine Learning and Neural Network Approaches | View |
7 | 2018 | Exploiting Effective Representations for Chinese Sentiment Analysis Using a Multi-Channel Convolutional Neural Network | View |
8 | 2018 | Combining Convolution and Recursive Neural Networks for Sentiment Analysis | View |
9 | 2017 | Visual and Textual Sentiment Analysis Using Deep Fusion Convolutional Neural Networks | View |
10 | 2017 | On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis | View |
11 | 2017 | Explaining Recurrent Neural Network Predictions in Sentiment Analysis | View |
Last updated: July 25, 2020
Source: GitHub
No. | Year | Title | Github |
---|---|---|---|
1 | 2019 | subword-nmt - Unsupervised Word Segmentation for Neural Machine Translation and Text Generation | View |
2 | 2018 | monoses - Unsupervised Statistical Machine Translation | View |
3 | 2018 | UnsupervisedMT - Phrase-Based & Neural Unsupervised Machine Translation | View |
4 | 2018 | nmt-keras - Neural Machine Translation with Keras (Theano/Tensorflow) | View |
5 | 2018 | undreamt - Unsupervised Neural Machine Translation | View |
6 | 2017 | nmtpytorch - Neural Machine Translation Framework in PyTorch | View |
7 | 2017 | seq2seq - Minimal Seq2Seq model with Attention for Neural Machine Translation in PyTorch | View |
8 | 2017 | OpenNMT-tf - Open Source Neural Machine Translation in TensorFlow | View |
9 | 2017 | nmt - TensorFlow Neural Machine Translation Tutorial | View |
10 | 2017 | sockeye - Sequence-to-sequence framework with MXNet with a focus on Neural Machine Translation | View |
11 | 2017 | bytenet_translation - A TensorFlow Implementation of Machine Translation In Neural Machine Tra | View |
12 | 2017 | OpenNMT-py - Open-Source Neural Machine Translation in PyTorch | View |
Last updated: July 19, 2020
Source: ArXiv
No. | Year | Title | URL |
---|---|---|---|
1 | 2020 | MATINF: A Jointly Labeled Large-Scale Dataset for Classification, Question Answering and Summarization | View |
2 | 2020 | Event-QA: A Dataset for Event-Centric Question Answering over Knowledge Graphs | View |
3 | 2020 | Rapidly Bootstrapping a Question Answering Dataset for COVID-19 | View |
4 | 2020 | HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data | View |
5 | 2020 | A Methodology for Creating Question Answering Corpora Using Inverse Data Annotation | View |
6 | 2020 | Practical Annotation Strategies for Question Answering Datasets | View |
7 | 2020 | Exploring BERT Parameter Efficiency on the Stanford Question Answering Dataset v2 | View |
8 | 2020 | FQuAD: French Question Answering Dataset | View |
9 | 2019 | JEC-QA: A Legal-Domain Question Answering Dataset | View |
10 | 2019 | QASC: A Dataset for Question Answering via Sentence Composition | View |
11 | 2019 | PubMedQA: A Dataset for Biomedical Research Question Answering | View |
12 | 2019 | TWEETQA: A Social Media Focused Question Answering Dataset | View |
13 | 2018 | POIReviewQA: A Semantically Enriched POI Retrieval and Question Answering Dataset | View |
14 | 2018 | HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering | View |
15 | 2018 | Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering | View |
16 | 2018 | Transforming Question Answering Datasets Into Natural Language Inference Datasets | View |
17 | 2018 | ODSQA: Open-domain Spoken Question Answering Dataset | View |
18 | 2018 | Analysis of Wikipedia-based Corpora for Question Answering | View |
19 | 2017 | Quasar: Datasets for Question Answering by Search and Reading | View |