Primate Labs, the software studio behind the widely used Geekbench benchmarking suite, announced its latest offering today: Geekbench AI, a new benchmark for evaluating machine-learning performance.
GSM Arena confirmed that the tool is available on iOS, Android, Windows, macOS, and Linux. It is designed primarily to assess how well devices fare on real-world AI tasks. Known as Geekbench ML during its preview phase, Geekbench AI delivers an all-around performance assessment of devices running AI-based apps.
The benchmark measures the real-world performance of a CPU, GPU, or NPU on machine-learning workloads. Geekbench AI generates three scores for every tested platform: single-precision, half-precision, and quantized. These scores reflect not only how fast the AI tasks ran on the hardware but also the accuracy of the computation.
Raw performance is only one dimension of Geekbench AI; tracking efficiency over time gives users a fuller picture of what a device can do. The benchmark supports many AI frameworks, including Core ML on macOS and iOS, OpenVINO on Windows and Linux, QNN on Snapdragon-powered Arm PCs, and a range of vendor-specific frameworks on Android.
Each test in Geekbench AI runs at least five times to ensure consistent, accurate results. With the tool now integrated into the Geekbench Browser, users can easily compare the AI performance of different devices, making it a handy resource for both users and developers looking to establish machine-learning benchmarks across platforms.
With these capabilities, Geekbench AI is poised to become an indispensable tool for measuring how well devices handle complex machine-learning tasks in a fast-evolving field.
ANI