| Model | AGIEval | GPT4All | TruthfulQA | BigBench |
|---|---|---|---|---|
| Hermes-3-Llama-3.1-8B | 41.51 | N/A (results file missing) | 58.61 | 43.08 |
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 26.38 | ± | 2.77 |
| | | acc_norm | 25.20 | ± | 2.73 |
| agieval_logiqa_en | 0 | acc | 39.02 | ± | 1.91 |
| | | acc_norm | 40.25 | ± | 1.92 |
| agieval_lsat_ar | 0 | acc | 23.91 | ± | 2.82 |
| | | acc_norm | 21.74 | ± | 2.73 |
| agieval_lsat_lr | 0 | acc | 50.78 | ± | 2.22 |
| | | acc_norm | 46.08 | ± | 2.21 |
| agieval_lsat_rc | 0 | acc | 59.85 | ± | 2.99 |
| | | acc_norm | 56.51 | ± | 3.03 |
| agieval_sat_en | 0 | acc | 74.27 | ± | 3.05 |
| | | acc_norm | 65.05 | ± | 3.33 |
| agieval_sat_en_without_passage | 0 | acc | 45.63 | ± | 3.48 |
| | | acc_norm | 42.72 | ± | 3.45 |
| agieval_sat_math | 0 | acc | 41.36 | ± | 3.33 |
| | | acc_norm | 34.55 | ± | 3.21 |
AGIEval average: 41.51%
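The AGIEval figure appears to be the unweighted mean of the acc_norm values in the table above. A minimal sanity-check sketch, assuming that convention (the values are copied from the table):

```python
# Assumed convention: AGIEval average = unweighted mean of per-task acc_norm.
acc_norm = [25.20, 40.25, 21.74, 46.08, 56.51, 65.05, 42.72, 34.55]

average = sum(acc_norm) / len(acc_norm)
print(f"{average:.2f}")  # -> 41.51, matching the reported figure
```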
GPT4All average: not available (the evaluation run reported "Error: File does not exist" for the GPT4All results, so no per-task scores were produced).
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 40.39 | ± | 1.72 |
| | | mc2 | 58.61 | ± | 1.54 |
TruthfulQA average: 58.61% (the mc2 score)
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 60.53 | ± | 3.56 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 66.40 | ± | 2.46 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 31.78 | ± | 2.90 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 17.27 | ± | 2.00 |
| | | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 31.00 | ± | 2.07 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 20.71 | ± | 1.53 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 50.33 | ± | 2.89 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 38.80 | ± | 2.18 |
| bigbench_navigate | 0 | multiple_choice_grade | 56.60 | ± | 1.57 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 67.75 | ± | 1.05 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 43.97 | ± | 2.35 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 39.18 | ± | 1.55 |
| bigbench_snarks | 0 | multiple_choice_grade | 68.51 | ± | 3.46 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 50.61 | ± | 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 43.50 | ± | 1.57 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 22.56 | ± | 1.18 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 15.60 | ± | 0.87 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 50.33 | ± | 2.89 |
BigBench average: 43.08%
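The BigBench figure appears to be the unweighted mean of the multiple_choice_grade values only; the exact_str_match row under bigbench_geometric_shapes does not seem to be included. A sketch under that assumption, with the values copied from the table:

```python
# Assumed convention: BigBench average = unweighted mean of the 18
# multiple_choice_grade scores; exact_str_match is excluded.
mcg = [60.53, 66.40, 31.78, 17.27, 31.00, 20.71, 50.33, 38.80, 56.60,
       67.75, 43.97, 39.18, 68.51, 50.61, 43.50, 22.56, 15.60, 50.33]

print(f"{sum(mcg) / len(mcg):.2f}")  # -> 43.08, matching the reported figure
```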
Average score: not available (the GPT4All suite produced no results, so a mean across all four suites cannot be computed).
Elapsed time: 02:48:56