Creator
SenseTime & Shanghai AI Laboratory (equal contribution)
URL
https://github.com/InternLM/InternLM-techreport
Model Description
We present InternLM, a multilingual foundational language model with 104B parameters. InternLM is pre-trained on a large corpus of 1.65T tokens using a multi-phase progressive process, and then fine-tuned to align with human preferences. Evaluation on a number of benchmarks shows that InternLM achieves state-of-the-art performance in multiple aspects, including knowledge understanding, reading comprehension, reasoning, mathematics, and coding. With such well-rounded capabilities, InternLM achieves outstanding performance on comprehensive exams, including MMLU, AGIEval, and C-Eval, without resorting to external tools. Notably, InternLM is even competitive with GPT-4 in certain aspects. InternLM also demonstrates a strong capability for understanding the Chinese language and Chinese culture, which makes it a suitable foundation model for Chinese-oriented language applications.
Prompt Format
Few-shot
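Below is a minimal sketch of how a few-shot evaluation prompt might be assembled, to illustrate what "few-shot" means here: a handful of solved examples are prepended to the test question. The `build_few_shot_prompt` helper, the template, and the example items are hypothetical illustrations; the InternLM report does not specify an exact prompt template.

```python
# Hypothetical sketch of a few-shot prompt builder; not the official
# InternLM evaluation template, which is unspecified in the report.

def build_few_shot_prompt(examples, query, instruction=""):
    """Concatenate k solved examples before the test question."""
    parts = [instruction] if instruction else []
    for ex in examples:
        parts.append(f"Question: {ex['question']}\nAnswer: {ex['answer']}")
    # The final question is left unanswered for the model to complete.
    parts.append(f"Question: {query}\nAnswer:")
    return "\n\n".join(parts)

# Hypothetical 2-shot usage:
examples = [
    {"question": "What is 2 + 2?", "answer": "4"},
    {"question": "What is 7 * 6?", "answer": "42"},
]
print(build_few_shot_prompt(examples, "What is 12 - 5?"))
```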