============================================================================
EACL 2021 Reviews for Submission #28
============================================================================

Title: IIITT@DravidianLangTech-EACL2021: Transfer Learning for Offensive Language Detection in Dravidian Languages

Authors: Konthala Yasaswini, Karthik Puranik, Adeep Hande, Ruba Priyadharshini, Sajeetha Thavareesan and Bharathi Raja Chakravarthi

============================================================================
                                REVIEWER #1
============================================================================

---------------------------------------------------------------------------
                            Reviewer's Scores
---------------------------------------------------------------------------
                     Appropriateness (1-5): 5
                             Clarity (1-5): 5
        Originality / Innovativeness (1-5): 3
             Soundness / Correctness (1-5): 3
               Meaningful Comparison (1-5): 4
                        Thoroughness (1-5): 4
          Impact of Ideas or Results (1-5): 3
                      Recommendation (1-5): 4
                 Reviewer Confidence (1-5): 4

Detailed Comments
---------------------------------------------------------------------------
A standard transfer-learning paper: well written and well organized. No
major complaints beyond its incrementality over previous work.
---------------------------------------------------------------------------

============================================================================
                                REVIEWER #2
============================================================================

---------------------------------------------------------------------------
                            Reviewer's Scores
---------------------------------------------------------------------------
                     Appropriateness (1-5): 5
                             Clarity (1-5): 5
        Originality / Innovativeness (1-5): 4
             Soundness / Correctness (1-5): 4
               Meaningful Comparison (1-5): 5
                        Thoroughness (1-5): 5
          Impact of Ideas or Results (1-5): 3
                      Recommendation (1-5): 5
                 Reviewer Confidence (1-5): 5

Detailed Comments
---------------------------------------------------------------------------
The authors used transfer-learning-based models to identify offensive
language in code-mixed Dravidian languages, experimenting with six model
variations.

Sections 4.1 and 4.2 are misleading: they suggest that M-BERT was used for
Tamil and XLM-RoBERTa for the other two languages, whereas Table 3 shows
that all variations were run for all three languages.

The CNN-BiLSTM model could be elaborated on in the paper, and future
directions could be included.
---------------------------------------------------------------------------

============================================================================
                                REVIEWER #3
============================================================================

---------------------------------------------------------------------------
                            Reviewer's Scores
---------------------------------------------------------------------------
                     Appropriateness (1-5): 5
                             Clarity (1-5): 4
        Originality / Innovativeness (1-5): 4
             Soundness / Correctness (1-5): 4
               Meaningful Comparison (1-5): 3
                        Thoroughness (1-5): 4
          Impact of Ideas or Results (1-5): 4
                      Recommendation (1-5): 4
                 Reviewer Confidence (1-5): 5

Detailed Comments
---------------------------------------------------------------------------
The authors attempted the task with various transfer-learning-based models.
The introduction, related work, and results sections are well structured
and well written. I have a few suggestions to improve the paper.

In the dataset section, the three papers you cite are about sentiment
analysis datasets, not the offensive-language data the organizers provided
for the task. Please cite them elsewhere, such as the introduction or
related work, as the current placement misleads the reader.

You have not mentioned or cited all the models you used. Although ULMFiT
gave better results for Malayalam and Tamil, I found it mentioned nowhere
except in the analysis. Please cite all the major models you used before
the analysis.

From the system description, it appears that only M-BERT was used for Tamil
and XLM-RoBERTa for the other languages, but the analysis and the results
table show this is not the case. Please mention all the models you
attempted in the system description.
---------------------------------------------------------------------------