| Model | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---|---|---|---|
| Cybermodel | 36.9 | N/A (results file missing) | 60.35 | 38.21 |
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 20.87 | ± | 2.55 |
| | | acc_norm | 19.69 | ± | 2.50 |
| agieval_logiqa_en | 0 | acc | 34.10 | ± | 1.86 |
| | | acc_norm | 35.18 | ± | 1.87 |
| agieval_lsat_ar | 0 | acc | 20.43 | ± | 2.66 |
| | | acc_norm | 19.13 | ± | 2.60 |
| agieval_lsat_lr | 0 | acc | 44.31 | ± | 2.20 |
| | | acc_norm | 40.20 | ± | 2.17 |
| agieval_lsat_rc | 0 | acc | 53.90 | ± | 3.04 |
| | | acc_norm | 48.33 | ± | 3.05 |
| agieval_sat_en | 0 | acc | 67.96 | ± | 3.26 |
| | | acc_norm | 61.65 | ± | 3.40 |
| agieval_sat_en_without_passage | 0 | acc | 44.66 | ± | 3.47 |
| | | acc_norm | 37.86 | ± | 3.39 |
| agieval_sat_math | 0 | acc | 40.00 | ± | 3.31 |
| | | acc_norm | 33.18 | ± | 3.18 |
AGIEval average: 36.9%
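For reference, the AGIEval average appears to be the arithmetic mean of the acc_norm values in the table above (the acc rows are not included). A minimal sketch of that calculation, with the values copied from the table:

```python
# acc_norm values copied from the AGIEval table above
acc_norm = [19.69, 35.18, 19.13, 40.20, 48.33, 61.65, 37.86, 33.18]

# Plain arithmetic mean; rounds to the reported 36.9
average = sum(acc_norm) / len(acc_norm)
print(f"AGIEval average: {average:.2f}")  # -> 36.90
```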
GPT4All average: not available (the results file is missing, so no per-task table is shown).
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 42.23 | ± | 1.73 |
| | | mc2 | 60.35 | ± | 1.50 |
TruthfulQA average: 60.35%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 61.58 | ± | 3.54 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 62.06 | ± | 2.53 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 33.72 | ± | 2.95 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 23.68 | ± | 2.25 |
| | | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 26.80 | ± | 1.98 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 19.86 | ± | 1.51 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 38.00 | ± | 2.81 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 35.20 | ± | 2.14 |
| bigbench_navigate | 0 | multiple_choice_grade | 49.60 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 62.10 | ± | 1.09 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 34.82 | ± | 2.25 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 27.35 | ± | 1.41 |
| bigbench_snarks | 0 | multiple_choice_grade | 62.43 | ± | 3.61 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 50.20 | ± | 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 28.30 | ± | 1.43 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 19.68 | ± | 1.12 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 14.46 | ± | 0.84 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 38.00 | ± | 2.81 |
Bigbench average: 38.21%
Average score: not available (the GPT4All result is missing, so the overall mean cannot be computed).
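In this results format the overall score is typically the mean of the four per-benchmark averages, which is why it cannot be reported here: one of the four inputs (GPT4All) is missing. A hypothetical sketch of that check, using the values from the summary table:

```python
# Per-benchmark averages from the summary table; GPT4All is None because
# its results file was missing
scores = {"AGIEval": 36.9, "GPT4All": None, "TruthfulQA": 60.35, "Bigbench": 38.21}

if any(v is None for v in scores.values()):
    print("Average score: Not available due to errors")
else:
    print(f"Average score: {sum(scores.values()) / len(scores):.2f}")
```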
Elapsed time: 03:23:37