[R] Evaluating MLLMs with Child-Inspired Cognitive Tasks
Reddit · March 24, 2026 · ai


Hey there, we’re sharing KidGym, an interactive 2D grid-based benchmark for evaluating MLLMs in continuous, trajectory-based interaction, accepted to ICLR 2026.

Motivation: Many existing MLLM benchmarks are static and focus on isolated skills, which limits how faithfully they characterize model capabilities in continuous interactive settings. Inspired by the Wechsler Intelligence Scale for Children (WISC), we organize evaluation into five cognitive dimensions and design tasks that probe both single abilities and compositional abilities.

Previews of the 12 tasks in KidGym

KidGym Features:

5 abilities: Execution, Memory, Learning, Planning, Perception Reasoning

12 task categories × 3 difficulty levels, covering single-ability and compositional tasks

Randomized layouts and diverse scenarios to emphasize generalization beyond memorization / data leakage

LLM-friendly interaction design: backpack system, hint panel, item indexing, and high-level actions

Gym-style API for easy customization, extension, and reuse by the community
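To make the "Gym-style API" point concrete, here is a minimal sketch of the standard reset/step interaction loop such an environment supports. The environment class, observation fields, and action strings below are hypothetical stand-ins, not KidGym's actual API; see the repo for the real interface.

```python
class MockGridEnv:
    """Hypothetical stand-in for a 2D grid environment with a Gym-style interface."""

    def __init__(self, max_steps=5):
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        self.steps = 0
        # Observation: a rendered grid description plus agent state
        # (a real environment might also expose hints and a backpack).
        obs = {"grid": "agent at (0, 0); key at (2, 1)", "backpack": []}
        return obs, {}

    def step(self, action):
        self.steps += 1
        obs = {"grid": f"state after '{action}'", "backpack": []}
        reward = 1.0 if action == "pick_up" else 0.0
        terminated = action == "pick_up"          # task solved
        truncated = self.steps >= self.max_steps  # step budget exhausted
        return obs, reward, terminated, truncated, {}


def evaluate(env, policy):
    """Run one episode; return the (action, reward) trajectory and total reward."""
    obs, _ = env.reset()
    total, trajectory = 0.0, []
    terminated = truncated = False
    while not (terminated or truncated):
        action = policy(obs)  # e.g. query an MLLM with the rendered observation
        obs, reward, terminated, truncated, _ = env.step(action)
        trajectory.append((action, reward))
        total += reward
    return trajectory, total


# A trivial scripted policy standing in for an MLLM agent.
scripted = iter(["move_right", "move_right", "pick_up"])
trajectory, total = evaluate(MockGridEnv(), lambda obs: next(scripted))
print(total)  # -> 1.0
```

The point of the Gym-style contract is that swapping in a custom task or a different agent only requires implementing `reset`/`step` or the `policy` callable, respectively.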

Five-dimensional capability radar chart

Findings:

We find that while strong models can perform very well on some single-ability tasks, performance drops noticeably on tasks requiring:

Abstract / non-semantic visual reasoning

Numerical sensitivity / counting

Multi-rule coordination and compositional reasoning across abilities

We hope KidGym can provide a more fine-grained, interpretable, and interaction-oriented perspective for evaluating multimodal large models.

Feedback and discussion are very welcome!

Paper: https://arxiv.org/abs/2603.20209

Project Page: https://bobo-ye.github.io/KidGym/

GitHub: https://github.com/BoBo-Ye/KidGym
