SK Telecom, Krafton Unveil 7B Language Models for Math and Game AI

SK Telecom and Krafton have jointly developed three 7B (7 billion parameter) reasoning-specialized language models. The newly released model is a small language model specialized in solving mathematical problems and writing code, trained with a technique Krafton developed in-house. Using this technique, the model was evaluated on AIME 25, a mathematical reasoning benchmark.


The model proved the technique's effectiveness by recording a clear performance improvement in mathematics. Mathematics demands spatial perception and logical reasoning, capabilities technically close to other high-difficulty reasoning domains, including games. Given these capabilities, Krafton also sees potential to expand its game-centered AI technology on top of the model.


SKT and Krafton developed the language model jointly, each contributing to its quality and performance: SKT through infrastructure construction and Krafton through improvements to the training technique. The collaboration is regarded as a case that demonstrates their capability to develop domain-specific AI models.


Krafton developed its own "incorrect answer review" training technique, which analyzes and remedies the weaknesses of existing models. For each question a model answers incorrectly, the method finds the correct answer and trains the model by contrasting it with the incorrect one, securing both reasoning accuracy and efficiency. SKT built the infrastructure for data verification and model training, helping to ensure model quality and stability.
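The article does not disclose how Krafton's incorrect answer review technique is implemented. As a minimal sketch of the general idea it describes, the following hypothetical Python pipeline collects questions a model got wrong, pairs each wrong answer with a verified correct answer, and emits contrastive (preference-style) training examples; the `grade` function and record format are illustrative assumptions, not Krafton's actual code.

```python
# Illustrative sketch only: one plausible data pipeline for
# "incorrect answer review" style training. Assumed, not Krafton's method.

def grade(prediction: str, reference: str) -> bool:
    """Toy grader: exact match on the final answer (hypothetical)."""
    return prediction.strip() == reference.strip()

def build_review_pairs(records):
    """From (question, model_answer, reference_answer) records, keep only
    the incorrectly answered questions and emit preference-style pairs
    contrasting the correct answer with the model's wrong attempt."""
    pairs = []
    for question, model_answer, reference_answer in records:
        if grade(model_answer, reference_answer):
            continue  # correctly answered questions need no review
        pairs.append({
            "prompt": question,
            "chosen": reference_answer,   # verified correct answer
            "rejected": model_answer,     # the model's incorrect attempt
        })
    return pairs

if __name__ == "__main__":
    records = [
        ("What is 12 * 12?", "144", "144"),  # correct -> skipped
        ("What is 7 * 8?", "54", "56"),      # wrong -> becomes a review pair
    ]
    print(build_review_pairs(records))
```

Pairs in this shape could then feed a contrastive fine-tuning method such as preference optimization, which is one common way to teach a model to prefer correct over incorrect reasoning.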
