Low-Resource Language Text Classification
Abstract
This paper describes our approach to SemEval-2023 Task 12 on sentiment analysis for African languages. We fine-tune multilingual pretrained models (XLM-RoBERTa, AfroXLMR) for low-resource text classification.
Approach
- Multilingual pretraining: Leveraging cross-lingual transfer
- Fine-tuning strategies: Adapting to limited data scenarios
- Language coverage: Hausa, Yoruba, Igbo, and more
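The fine-tuning setup above can be sketched with the Hugging Face `transformers` API. This is a minimal illustration, not the paper's exact configuration: the model checkpoint name, hyperparameters, and 3-way label set are assumptions (AfriSenti-style sentiment labels), and the heavy imports are kept inside the helper so the sketch loads without the dependency installed.

```python
# Illustrative sketch of fine-tuning a multilingual encoder (e.g. AfroXLMR)
# for low-resource sentiment classification. Checkpoint name and
# hyperparameters are hypothetical, not the authors' reported settings.

LABELS = ["negative", "neutral", "positive"]  # assumed 3-way sentiment scheme
label2id = {label: i for i, label in enumerate(LABELS)}
id2label = {i: label for label, i in label2id.items()}

def build_trainer(train_ds, eval_ds, model_name="Davlan/afro-xlmr-base"):
    """Return a Trainer that fine-tunes `model_name` on (text, label) data.

    Imports live inside the function so this module can be read/loaded
    without `transformers` installed; a real script would import at top level.
    """
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=len(LABELS),
        id2label=id2label, label2id=label2id)

    def tokenize(batch):
        # Truncate to a fixed length; long social-media posts are rare.
        return tokenizer(batch["text"], truncation=True, max_length=128)

    args = TrainingArguments(
        output_dir="out",
        learning_rate=2e-5,           # typical for encoder fine-tuning
        num_train_epochs=3,           # small data => few epochs
        per_device_train_batch_size=16)
    return Trainer(model=model, args=args,
                   train_dataset=train_ds.map(tokenize, batched=True),
                   eval_dataset=eval_ds.map(tokenize, batched=True))
```

Cross-lingual transfer here comes from the pretrained checkpoint: the same `build_trainer` call works for Hausa, Yoruba, or Igbo data, since the multilingual tokenizer and encoder are shared across languages.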