dc.description.abstract | The emergence of Wikipedia has changed how people acquire new knowledge. It now has more than 298 language editions. However, the number of articles is severely imbalanced across editions: English Wikipedia far outnumbers all others. Chinese Wikipedia, for example, has only about one-sixth as many articles as English Wikipedia. In addition, cross-language links between editions are also seriously lacking: according to the statistics, only 2.3% of English Wikipedia articles have cross-language links to their Chinese counterparts.
Beyond Wikipedia, some countries have their own online encyclopedias whose content is far more abundant than the corresponding Wikipedia language edition. We therefore aim to build a cross-language online encyclopedia linking English Wikipedia and Baidu Baike. This not only contributes to global knowledge sharing but is also conducive to cross-language research. | en_US |
Previous cross-language article linking (CLAL) approaches usually depend on language-specific characteristics and the structure of each encyclopedia. We therefore propose a deep learning model that uses only the textual main content of articles as training data and applies various neural networks to measure the semantic similarity of cross-language article pairs. To handle a different language pair, the only change required is replacing the pre-trained word embeddings. | en_US |
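The embedding-swap idea can be illustrated with a minimal sketch. This is not the thesis's actual model: the tiny vocabularies, toy vectors, and averaging-plus-cosine similarity below are hypothetical stand-ins for real pre-trained cross-lingual embeddings and the proposed neural networks. The point is structural: each article is encoded through a language-specific embedding table, and switching the language pair only means swapping that table.

```python
import numpy as np

# Hypothetical pre-trained embeddings, one table per language.
# In practice these would be loaded from pre-trained word vectors
# aligned in a shared cross-lingual space.
EMB_EN = {"wikipedia":    np.array([0.90, 0.10, 0.00]),
          "encyclopedia": np.array([0.80, 0.20, 0.10])}
EMB_ZH = {"维基百科":  np.array([0.88, 0.12, 0.02]),
          "百科全书": np.array([0.79, 0.21, 0.09])}

def article_vector(tokens, emb):
    """Average the embeddings of known tokens into one article vector."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity(tokens_a, tokens_b, emb_a, emb_b):
    """Semantic similarity of two articles. When the language pair
    changes, only emb_a / emb_b need to be replaced."""
    return cosine(article_vector(tokens_a, emb_a),
                  article_vector(tokens_b, emb_b))

score = similarity(["wikipedia", "encyclopedia"],
                   ["维基百科", "百科全书"],
                   EMB_EN, EMB_ZH)
print(score)
```

A real system would replace the averaging step with the trained neural encoders, but the interface stays the same: text in, similarity score out, embeddings as the only language-dependent component.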