Multi-Level Contrastive Learning for Cross-Lingual Alignment

Published in 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2022), 2022

Code

Abstract

Cross-language pre-trained models such as multilingual BERT (mBERT) have achieved strong performance on various cross-lingual downstream NLP tasks. This paper proposes a multi-level contrastive learning (ML-CTL) framework to further improve the cross-lingual ability of pre-trained models. The proposed method uses translated parallel data to encourage the model to generate similar semantic embeddings for different languages. However, unlike the sentence-level alignment used in most previous studies, we explicitly integrate the word-level information of each pair of parallel sentences into contrastive learning. Moreover, a cross-zero noise contrastive estimation (CZ-NCE) loss is proposed to alleviate the impact of floating-point errors when training with a small batch size. The proposed method significantly improves the cross-lingual transfer ability of the base model (mBERT) and outperforms same-size models on multiple zero-shot cross-lingual downstream tasks in the XTREME benchmark.
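
To illustrate how parallel data can drive contrastive alignment, the sketch below implements a plain sentence-level InfoNCE loss with in-batch negatives. This is a generic illustration under stated assumptions, not the paper's ML-CTL objective: the function name `info_nce_loss`, the temperature value, and the use of mBERT [CLS] vectors as sentence embeddings are all illustrative choices, and the paper's word-level integration and CZ-NCE loss are not reproduced here.

```python
import torch
import torch.nn.functional as F


def info_nce_loss(src_emb: torch.Tensor, tgt_emb: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Sentence-level InfoNCE over a batch of parallel sentence embeddings.

    src_emb, tgt_emb: (batch, dim) embeddings of source sentences and their
    translations. The i-th rows form a positive pair; all other rows in the
    batch serve as in-batch negatives.
    """
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    # (batch, batch) cosine-similarity matrix scaled by the temperature.
    logits = src @ tgt.t() / temperature
    # The correct "class" for row i is column i (its translation).
    labels = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    torch.manual_seed(0)
    # Random stand-ins for mBERT sentence embeddings of 8 parallel pairs.
    src_emb = torch.randn(8, 768)
    tgt_emb = torch.randn(8, 768)
    print(info_nce_loss(src_emb, tgt_emb))
```

In this formulation the model is pushed to place a sentence and its translation close together while pushing apart unrelated sentences in the same batch; the paper extends this idea to the word level and replaces the standard loss with CZ-NCE to cope with small batch sizes.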

Poster

Download PDF.

Arxiv

Download PDF.

Slides

Download PDF.

Citation

Recommended citation: B. Chen, W. Guo, B. Gu, Q. Liu and Y. Wang, "Multi-Level Contrastive Learning for Cross-Lingual Alignment," ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, Singapore, 2022, pp. 7947-7951, doi: 10.1109/ICASSP43922.2022.9747720. https://ieeexplore.ieee.org/document/9747720