KoBERT
KoBERT was developed to overcome the limitations of the original BERT's Korean-language performance. It was trained on a large-scale corpus of millions of Korean sentences collected from Wikipedia and news articles, and it applies a data-driven tokenization technique that reflects the irregular morphological changes of Korean. This tokenization change alone yielded a performance improvement of more than 2.6%.
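The data-driven tokenization described above learns subword units from corpus statistics rather than from hand-written rules. As a rough illustration of the idea (this is a toy byte-pair-encoding sketch, not KoBERT's actual tokenizer), repeatedly merging the most frequent adjacent symbol pair produces subword vocabulary entries:

```python
from collections import Counter

def learn_bpe_merges(corpus, num_merges):
    """Learn byte-pair-encoding merge rules from a word-frequency table.

    corpus maps space-separated symbol sequences to their counts,
    e.g. {"l o w": 5}. Returns the list of symbol pairs that were merged,
    most frequent first.
    """
    corpus = dict(corpus)
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for word, freq in corpus.items():
            symbols = word.split()
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        a, b = max(pairs, key=pairs.get)  # most frequent pair wins
        merges.append((a, b))
        # Merge the pair into a single symbol everywhere it occurs.
        # (A naive string replace suffices for this sketch; production
        # tokenizers match whole symbols to avoid false overlaps.)
        corpus = {word.replace(f"{a} {b}", f"{a}{b}"): freq
                  for word, freq in corpus.items()}
    return merges

# Hypothetical toy corpus; frequent character sequences become subwords.
corpus = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
print(learn_bpe_merges(corpus, 3))  # [('e', 's'), ('es', 't'), ('l', 'o')]
```

Because the merges come from the data itself, the same procedure adapts to Korean's irregular surface forms without language-specific rules.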
KoBERT uses ring-reduce-based distributed training to process large amounts of data quickly, training on more than a billion sentences across multiple machines. It also supports a range of deep learning frameworks, including PyTorch, TensorFlow, ONNX, and MXNet, which helps language-understanding services spread across many fields.
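In ring all-reduce, the N workers pass gradient chunks around a ring in two phases (reduce-scatter, then all-gather), so each worker sends only one chunk per step regardless of cluster size. A minimal single-process simulation of the two phases (an illustration of the algorithm, not SKT's actual implementation) looks like this:

```python
def ring_allreduce(data):
    """Simulate ring all-reduce over n workers, each holding n chunks.

    data[i][c] is worker i's value for chunk c. Returns the state after
    2*(n-1) steps, when every worker holds the fully summed chunks.
    """
    n = len(data)
    data = [list(row) for row in data]

    # Phase 1: reduce-scatter. At step t, worker i sends chunk (i - t) % n
    # to its ring neighbor, which adds it to its own copy. Sends are
    # snapshotted first to model all workers transmitting simultaneously.
    for step in range(n - 1):
        sends = [(src, (src - step) % n, data[src][(src - step) % n])
                 for src in range(n)]
        for src, chunk, value in sends:
            data[(src + 1) % n][chunk] += value

    # Phase 2: all-gather. At step t, worker i forwards its completed
    # chunk (i + 1 - t) % n, and the receiver overwrites its own copy.
    for step in range(n - 1):
        sends = [(src, (src + 1 - step) % n, data[src][(src + 1 - step) % n])
                 for src in range(n)]
        for src, chunk, value in sends:
            data[(src + 1) % n][chunk] = value
    return data

# Three workers, three chunks each; every worker ends with the sums.
print(ring_allreduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
```

The per-worker communication cost stays constant as workers are added, which is why this pattern scales to training on billions of sentences across many machines.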
- GitHub: https://github.com/SKTBrain/KoBERT