Samsung benchmarks real productivity of enterprise AI models

Samsung is tackling the limitations of existing benchmarks to better assess the real-world productivity of AI models in enterprise settings. The new system, developed by Samsung Research and named TRUEBench, aims to address the growing gap between theoretical AI performance and its actual utility in the workplace.

As businesses worldwide accelerate their adoption of large language models (LLMs) to improve their operations, a challenge has emerged: how to accurately gauge their effectiveness. Many existing benchmarks focus on academic or general-knowledge tests, are often limited to English, and rely on simple question-and-answer formats. This leaves enterprises without a reliable method for evaluating how an AI model will perform on complex, multilingual, and context-rich business tasks.

Samsung’s TRUEBench, short for Trustworthy Real-world Usage Evaluation Benchmark, has been developed to fill this void. It provides a comprehensive suite of metrics that assesses LLMs based on scenarios and tasks directly relevant to real-world corporate environments. The benchmark draws upon Samsung’s own extensive internal enterprise use of AI models, ensuring the evaluation criteria are grounded in genuine workplace demands.

The framework evaluates common enterprise functions such as creating content, analysing data, summarising lengthy documents, and translating materials. These are broken down into 10 distinct categories and 46 sub-categories, providing a granular view of an AI’s productivity capabilities.

“Samsung Research brings deep expertise and a competitive edge through its real-world AI experience,” said Paul (Kyungwhoon) Cheun, CTO of the DX Division at Samsung Electronics and Head of Samsung Research. “We expect TRUEBench to establish evaluation standards for productivity.”

To tackle the limitations of older benchmarks, TRUEBench is built upon a foundation of 2,485 diverse test sets spanning 12 different languages and supporting cross-linguistic scenarios. This multilingual approach is critical for global corporations where information flows across different regions. The test materials themselves reflect the variety of workplace requests, ranging from brief instructions of just eight characters to the complex analysis of documents exceeding 20,000 characters.

Samsung recognised that in a real business context, a user’s full intent is not always explicitly stated in their initial prompt. The benchmark is therefore designed to assess an AI model’s ability to understand and fulfil these implicit enterprise needs, moving beyond simple accuracy to a more nuanced measure of helpfulness and relevance.

To achieve this, Samsung Research developed a unique collaborative process between human experts and AI to create the productivity scoring criteria. Initially, human annotators establish the evaluation standards for a given task. An AI then reviews these standards, checking for potential errors, internal contradictions, or unnecessary constraints that might not reflect a realistic user expectation. Following the AI’s feedback, the human annotators refine the criteria. This iterative loop ensures the final evaluation standards are precise and reflective of a high-quality outcome.
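Samsung has not published the implementation behind this loop, but its logic can be sketched in a few lines of Python. The function names, the toy review heuristics, and the convergence rule below are illustrative assumptions rather than TRUEBench’s actual code:

```python
# Illustrative sketch of the human-AI criteria refinement loop described
# above. TRUEBench's real pipeline is not public; the function names, the
# review heuristics, and the convergence rule are assumptions.

from dataclasses import dataclass, field

@dataclass
class Criterion:
    text: str
    issues: list[str] = field(default_factory=list)

def ai_review(criteria: list[Criterion]) -> list[Criterion]:
    """Stand-in for an LLM pass that flags errors, contradictions,
    or unrealistic constraints in human-drafted criteria."""
    for c in criteria:
        if "always" in c.text or "never" in c.text:
            c.issues.append("absolute constraint may not match user intent")
    return criteria

def human_refine(criteria: list[Criterion]) -> list[Criterion]:
    """Stand-in for annotators revising the criteria the AI flagged."""
    for c in criteria:
        if c.issues:
            c.text = c.text.replace("always", "where appropriate")
            c.issues.clear()
    return criteria

def build_criteria(draft: list[Criterion], max_rounds: int = 3) -> list[Criterion]:
    """Alternate AI review and human refinement until no issues remain."""
    for _ in range(max_rounds):
        draft = ai_review(draft)
        if not any(c.issues for c in draft):
            break
        draft = human_refine(draft)
    return draft

criteria = build_criteria([Criterion("The summary must always list every figure.")])
print(criteria[0].text)  # refined wording after one review round
```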

This cross-verified process delivers an automated evaluation system that scores the performance of LLMs. By using AI to apply these refined criteria, the system minimises the subjective bias that can occur with human-only scoring, ensuring consistency and reliability across all tests. TRUEBench also employs a strict scoring model in which an AI model must satisfy every condition associated with a test to receive a passing mark. This all-or-nothing approach to individual conditions enables a more detailed and exacting assessment of AI models across different enterprise tasks.
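A minimal sketch of that scoring rule, assuming conditions can be expressed as simple predicate checks (the conditions here are toy stand-ins, not TRUEBench’s real criteria):

```python
# Minimal sketch of the all-or-nothing scoring rule: a response earns a
# pass only if every condition attached to the test is satisfied. The
# conditions below are toy stand-ins, not TRUEBench's real criteria.

def score_response(response: str, conditions) -> int:
    """Return 1 only when all condition checks pass, otherwise 0."""
    return int(all(check(response) for check in conditions))

conditions = [
    lambda r: len(r) <= 500,            # e.g. a length constraint
    lambda r: "summary" in r.lower(),   # e.g. required content
]

print(score_response("Summary: Q3 revenue grew 12%.", conditions))  # 1
print(score_response("Revenue grew 12%.", conditions))              # 0
```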

To boost transparency and encourage wider adoption, Samsung has made TRUEBench’s data samples and leaderboards publicly available on the global open-source platform Hugging Face. This allows developers, researchers, and enterprises to directly compare the productivity performance of up to five different AI models simultaneously. The platform provides a clear, at-a-glance overview of how various AIs stack up against each other on practical tasks.
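For those who want to explore the published samples programmatically, a hypothetical snippet using the Hugging Face `datasets` library is shown below; the dataset identifier is a placeholder, as the exact repository path is not confirmed here:

```python
# Hypothetical snippet for loading TRUEBench's public data samples via the
# Hugging Face `datasets` library. "<org>/TRUEBench" is a placeholder;
# substitute the actual repository ID from the Hugging Face page.

from datasets import load_dataset

samples = load_dataset("<org>/TRUEBench")  # placeholder dataset ID
print(samples)
```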

At the time of writing, the top 20 models by overall ranking can be viewed on the public TRUEBench leaderboard.

The full published data also includes the average length of the AI-generated responses. This allows for a simultaneous comparison of not only performance but also efficiency, a key consideration for businesses weighing operational costs and speed.
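As a toy illustration of this dual view, the snippet below compares models on both pass rate and average response length; all model names and numbers are made up for demonstration:

```python
# Toy comparison of models on both pass rate and average response length,
# mirroring the leaderboard's combined quality/efficiency view. All model
# names and numbers are fabricated for demonstration.

results = {
    "model-a": {"scores": [1, 0, 1, 1], "lengths": [420, 310, 505, 390]},
    "model-b": {"scores": [1, 1, 1, 0], "lengths": [880, 940, 700, 810]},
}

for name, r in results.items():
    pass_rate = sum(r["scores"]) / len(r["scores"])
    avg_len = sum(r["lengths"]) / len(r["lengths"])
    print(f"{name}: pass rate {pass_rate:.0%}, avg response {avg_len:.0f} chars")
```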

With the launch of TRUEBench, Samsung is not merely releasing another tool but aiming to change how the industry thinks about AI performance. By shifting the measure from abstract knowledge to tangible productivity, Samsung’s benchmark could help organisations make better decisions about which enterprise AI models to integrate into their workflows and bridge the gap between an AI’s potential and its proven value.
