## We propose a new methodology for evaluating foundation models

We have developed an open instruction benchmark for evaluating large language models in Russian: 21 challenging tasks for foundation models covering world knowledge, logic, causal reasoning, AI ethics, and much more. A unified leaderboard on the project website combines fixed, expert-verified tasks with standardized prompt and parameter configurations. The project is supported by the AI Alliance, leading industry players, and academic partners engaged in language model research.
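As an illustration of what "standardized prompt and parameter configurations" can mean in practice, here is a minimal Python sketch of a frozen per-task evaluation setup. The class, task name, template, and parameter values are hypothetical assumptions for this example, not the benchmark's actual API.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TaskConfig:
    """Hypothetical fixed evaluation setup for one benchmark task.

    Freezing the prompt template and generation parameters means every
    submitted model is scored under identical conditions.
    """
    name: str
    prompt_template: str                    # fixed instruction template
    generation_params: dict = field(default_factory=dict)

# Illustrative values only; the real benchmark's prompts are in Russian
# and are fixed by the maintainers, not chosen by submitters.
logic_task = TaskConfig(
    name="logical_reasoning",
    prompt_template=(
        "Read the statement and answer the question.\n"
        "Statement: {premise}\nQuestion: {question}\nAnswer:"
    ),
    generation_params={"temperature": 0.0, "max_new_tokens": 16},
)

def build_prompt(cfg: TaskConfig, sample: dict) -> str:
    """Render the fixed template with one dataset sample."""
    return cfg.prompt_template.format(**sample)

if __name__ == "__main__":
    sample = {"premise": "All cats are animals.",
              "question": "Is a cat an animal?"}
    print(build_prompt(logic_task, sample))
```

Pinning the template and decoding parameters per task, rather than leaving them to each submitter, is what makes leaderboard scores directly comparable across models.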