Introducing ScienceBoard, a first-of-its-kind evaluation platform for multimodal agents in scientific workflows.
We introduce ScienceBoard, a realistic and multimodal environment designed to evaluate and advance computer-using agents for scientific discovery. By integrating domain-specific software and curating a benchmark of validated workflows, ScienceBoard enables rigorous assessment of agents’ abilities to operate in real scientific settings.
OS-Genesis is built around the following components, aiming to synthesize high-quality trajectory data for GUI agents:
We will release all code for the infrastructure, benchmark, and evaluation pipelines, along with further details. We hope ScienceBoard can inspire and boost future research on computer-using agents in scientific workflows.
Training with high-quality GUI trajectories is essential for enhancing agentic capabilities. Ideal GUI agent trajectories include the following key components:
After reverse task synthesis generates high-level and low-level task instructions, these instructions are executed within the GUI environment to create complete trajectories.
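The execution phase can be sketched as a standard rollout loop (the `env` and `agent` interfaces below are hypothetical stand-ins, not OS-Genesis's actual harness):

```python
def rollout(env, agent, high_level_instruction, max_steps=15):
    """Execute a synthesized instruction in a GUI environment,
    recording (observation, action) pairs as a trajectory."""
    trajectory = []
    obs = env.reset()
    for _ in range(max_steps):
        action = agent.act(obs, high_level_instruction)
        trajectory.append({"observation": obs, "action": action})
        obs, done = env.step(action)
        if done:
            break
    return trajectory

class ToyEnv:
    """Stand-in for a real GUI environment (hypothetical interface)."""
    def __init__(self): self.t = 0
    def reset(self): self.t = 0; return "screen_0"
    def step(self, action):
        self.t += 1
        return f"screen_{self.t}", self.t >= 3   # "done" after 3 steps

class ToyAgent:
    """Stand-in for the executing model."""
    def act(self, obs, instruction): return f"click@{obs}"

traj = rollout(ToyEnv(), ToyAgent(), "Turn on Wi-Fi")  # 3 (obs, action) records
```

The resulting records then flow into the reward-model stage described next.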
To ensure the quality and utility of these trajectories, OS-Genesis employs a Trajectory Reward Model (TRM). Built upon GPT-4o, TRM evaluates each trajectory based on completion (task fulfillment) and coherence (logical sequence of actions), assigning a graded reward score from 1 to 5. Unlike traditional binary filtering methods, TRM allows even incomplete but valuable trajectories to contribute to training.
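The contrast between binary filtering and TRM's graded rewards can be illustrated with a simple weighting scheme (a sketch with made-up scores; in OS-Genesis the 1-5 grade comes from a GPT-4o call, which we do not reproduce here):

```python
def binary_filter(trajectories, scores, threshold=4):
    """Traditional filtering: keep only trajectories scored >= threshold,
    discarding everything else."""
    return [t for t, s in zip(trajectories, scores) if s >= threshold]

def graded_weighting(trajectories, scores):
    """TRM-style usage: keep every trajectory, but weight its training
    contribution by its 1-5 reward, so incomplete-but-useful data still helps."""
    return [(t, s / 5.0) for t, s in zip(trajectories, scores)]

trajs = ["traj_a", "traj_b", "traj_c"]
scores = [5, 3, 2]                       # hypothetical TRM grades
kept = binary_filter(trajs, scores)      # only "traj_a" survives
weighted = graded_weighting(trajs, scores)  # all survive: weights 1.0, 0.6, 0.4
```

Under graded weighting, a partially completed trajectory with coherent actions still contributes a discounted training signal instead of being thrown away.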
We first evaluate OS-Genesis on mobile tasks, covering AndroidWorld and AndroidControl.
AndroidWorld: OS-Genesis demonstrates exceptional performance on the AndroidWorld benchmark, significantly narrowing the gap between open-source agents and the state-of-the-art GPT-4o-based M3A agent. Training with OS-Genesis-synthesized data achieves nearly double the success rates compared to task-driven methods, with a success rate improvement from 9.82% to 17.41% for Qwen2-VL-7B and substantial gains for other backbones like InternVL2-8B.
AndroidControl: On AndroidControl, OS-Genesis showcases strong out-of-distribution (OOD) capability, outperforming baselines on both high-level and low-level tasks despite encountering only 20 of the 833 apps during synthesis. It achieves superior action prediction and planning, validating our exploration-first approach for generating diverse, high-quality tasks and adapting effectively to unseen environments.
Then, we evaluate OS-Genesis on web tasks, using the challenging online benchmark WebArena.
WebArena: On WebArena, OS-Genesis delivers notable performance improvements across 5 diverse navigation scenarios, outperforming task-driven baselines and achieving significant gains with the InternVL2-8B and Qwen2-VL-7B backbones. By leveraging reverse task synthesis, OS-Genesis effectively explores the rich interactive elements of web environments, producing more meaningful and diverse trajectories.
We investigate the impact of data scale on building GUI agents. To explore this, we partition the data synthesized by OS-Genesis into subsets, ranging from small-scale trajectory sets to those exceeding the size used in the main experiments. Using AndroidWorld as our testbed, we focus on two primary questions: (1) How does performance improve as the data scale increases? (2) Does performance saturate at higher data scales?
As shown above, task performance generally improves as the number of trajectories increases, while saturation emerges at larger data scales.
We also investigate the gap between OS-Genesis-synthesized data and human-annotated data. (1) Trajectories from OS-Genesis vs. human-annotated trajectories. We select 1K crowdsourced trajectories from the AndroidControl training set for comparison. As shown below, we significantly narrow the performance gap between synthetic and human-annotated trajectories. This is especially evident in high-level tasks, demonstrating that agents trained on OS-Genesis trajectories can plan and solve problems in a manner more closely aligned with humans. In terms of average success rate, taking human-annotated data as the gold standard, the performance retention rate of OS-Genesis data surpasses 80%.
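The retention rate here is just a ratio of average success rates (the numbers below are illustrative placeholders, not the paper's reported figures):

```python
def retention_rate(synthetic_sr: float, human_sr: float) -> float:
    """Performance retention: success rate when training on synthetic data,
    relative to the human-annotated gold standard."""
    return synthetic_sr / human_sr

# Hypothetical illustration: 45.0% average SR (synthetic) vs 54.0% (human)
r = retention_rate(45.0, 54.0)   # > 0.80, i.e. over 80% retention
```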
(2) High-level instructions synthesized by OS-Genesis vs. human-written instructions. For comparison, we match 500 human-written tasks from the AndroidControl training set and use GPT-4o for exploration. As observed, even when high-level instructions are written by humans, their performance falls short of OS-Genesis's instructions. This can be attributed to two main factors: (a) pre-defined tasks sometimes fail to align with the dynamic environment, and (b) models may introduce errors when interpreting the intentions of human annotators. In contrast, OS-Genesis generates data progressively, grounded in low-level interactions, which makes it inherently more suitable for unsupervised exploration and adaptation.
OS-Genesis is a data synthesis pipeline designed to revolutionize the construction of GUI agent trajectories. Reverse task synthesis enables the generation of diverse and coherent tasks by retroactively deriving instructions from observed interactions, while TRM ensures trajectory quality through graded evaluations. Together, these components address critical challenges in GUI agent trajectory construction, paving the way for high-quality agentic data generation. We hope OS-Genesis can provide a promising direction for generating high-quality trajectory data for GUI agents, bringing the community one step closer to achieving digital automation.
@article{sun2025scienceboard,
title={ScienceBoard: Evaluating Multimodal Autonomous Agents in Realistic Scientific Workflows},
author={Qiushi Sun and Zhoumianze Liu and Chang Ma and Zichen Ding and Fangzhi Xu and Zhangyue Yin and Haiteng Zhao and Zhenyu Wu and Kanzhi Cheng and Zhaoyang Liu and others},
year={2025},
journal={arXiv preprint arXiv:2505.19897}
}