• 13 Jun 2024
  • 1 Minute to read
Welcome to the "LLM Benchmarker" feature! This tool lets you compare the performance of two large language models (LLMs) directly from the chat interface. Whether you are an everyday user, an annotator, or part of a model management team, this feature provides valuable insight into the strengths and weaknesses of different models.

Getting Started

Accessing the Feature:

  1. Ensure that the "LLM Benchmarker" feature is enabled from the admin panel.
  2. Navigate to the chat interface where you will see an option to use the comparison mode.

Using Comparison Models

Selecting Models:

  1. Choose Model 1 and Model 2 from the list of available models.
  2. Optionally, set the temperature to control the creativity of the responses.
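The selection step above can be pictured as a small settings object. This is a hypothetical sketch only — the model names, field names, and temperature range are assumptions for illustration, not the product's actual API:

```python
# Hypothetical sketch of the settings a comparison session captures.
# Model names and the temperature range are illustrative assumptions.
comparison_settings = {
    "model_1": "model-alpha",   # first model to compare (example name)
    "model_2": "model-beta",    # second model to compare (example name)
    "temperature": 0.7,         # optional; higher values yield more creative responses
}

def validate_settings(settings: dict) -> bool:
    """Check that two distinct models are chosen and the temperature is sensible."""
    if settings["model_1"] == settings["model_2"]:
        return False  # comparing a model against itself is not useful
    return 0.0 <= settings.get("temperature", 1.0) <= 2.0

print(validate_settings(comparison_settings))  # True
```

Requiring two distinct models keeps every comparison round informative, since identical models would always produce a meaningless preference.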

Receiving Responses:

When comparison mode is active, you will receive two responses for each prompt entered in the chat interface.

After reviewing both responses, indicate your preference by clicking "Select to proceed." You must choose one of the two responses before asking another question.

Collecting and Using Feedback

Data Collection:

Each time you select your preferred response, the prompt and the chosen answer are securely saved to your database.
This data is invaluable for improving the models.
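To make the data-collection step concrete, here is a minimal sketch of what a stored preference record might contain. The field names and helper function are assumptions for illustration — the article only states that the prompt and chosen answer are saved:

```python
from datetime import datetime, timezone

def build_preference_record(prompt: str, response_1: str, response_2: str, chosen: str) -> dict:
    """Bundle one comparison round into a record suitable for storage.

    Hypothetical schema: field names are illustrative assumptions.
    """
    assert chosen in ("1", "2"), "chosen must identify Model 1 or Model 2"
    return {
        "prompt": prompt,                                   # the question the user asked
        "responses": {"1": response_1, "2": response_2},    # both candidate answers
        "preferred": chosen,                                # which response the user selected
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = build_preference_record(
    "Summarize this document.",
    "Answer from Model 1",
    "Answer from Model 2",
    "1",
)
```

Records like this pair a prompt with a human preference, which is exactly the kind of signal commonly used to evaluate and fine-tune models.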


The "LLM Benchmarker" feature is designed to enhance your experience by letting you compare different language models directly from the chat interface. Your feedback is crucial for the continuous improvement of the models. Enjoy exploring and comparing models to find the best fit for your needs!
