---
pretty_name: 'JBCS2025: AES Experimental Logs and Predictions'
license: cc-by-nc-4.0
configs:
  - config_name: evaluation_results
    data_files:
      - split: evaluation_results
        path: evaluation_results-*.parquet
  - config_name: bootstrap_confidence_intervals
    data_files:
      - split: boostrap_confidence_intervals
        path: boostrap_confidence_intervals-*.parquet
tags:
  - automatic-essay-scoring
  - portuguese
  - text-classification
---

# JBCS 2025: Experimental Artefacts for AES in Brazilian Portuguese

This repository contains all experimental artefacts (logs, configurations, predictions, and evaluation results) described in the paper:

> **Exploring the Usage of LLMs for Automatic Essay Scoring in Brazilian Portuguese Essays**
> André Barbosa, Igor Cataneo Silveira, Denis Deratani Mauá
> TODO


## 📦 What's in this dataset repo?

This is **not** a training dataset. Instead, it provides comprehensive logs and outputs from experiments evaluating different language models on Automatic Essay Scoring (AES) tasks in Brazilian Portuguese.

Specifically, it contains:

- 🔁 **JSONL files**: raw predictions from each evaluated model (one way to inspect these is sketched after this list).
- 📊 **CSV files**: detailed performance metrics (Quadratic Weighted Kappa, F1-score, etc.).
- ⚙️ **YAML files**: complete Hydra configurations for reproducibility.
- 📋 **Log files**: logs detailing each evaluation run.
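As a rough illustration, the snippet below reads one of the JSONL prediction files and recomputes Quadratic Weighted Kappa with scikit-learn. The file name and the `y_true`/`y_pred` field names are assumptions made for this sketch; check the actual files in this repo for the exact schema.

```python
# Minimal sketch: inspect a predictions JSONL file and recompute QWK.
# The file name and the "y_true"/"y_pred" fields below are hypothetical;
# the real files in this repo may use different names.
import json

from sklearn.metrics import cohen_kappa_score  # pip install scikit-learn


def load_jsonl(path):
    """Read one prediction record per line from a JSONL file."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]


records = load_jsonl("predictions.jsonl")  # hypothetical file name
y_true = [r["y_true"] for r in records]
y_pred = [r["y_pred"] for r in records]

# Quadratic Weighted Kappa, one of the metrics reported in the CSV files.
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```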

## 📚 Related Collection

All models and datasets related to this work are available in the Hugging Face collection:

🔗 AES JBCS2025 Collection


## 📊 Evaluated Models

The table below lists all models trained and evaluated for each essay competence (C1 to C5), along with links to their Hugging Face repository pages:

| Model | Architecture | Training Type | Link |
|---|---|---|---|
| mbert_base-C1 | Encoder-only | Fine-tuned | mbert_base-C1 |
| mbert_base-C2 | Encoder-only | Fine-tuned | mbert_base-C2 |
| mbert_base-C3 | Encoder-only | Fine-tuned | mbert_base-C3 |
| mbert_base-C4 | Encoder-only | Fine-tuned | mbert_base-C4 |
| mbert_base-C5 | Encoder-only | Fine-tuned | mbert_base-C5 |
| bertimbau_base-C1 | Encoder-only | Fine-tuned | bertimbau_base-C1 |
| bertimbau_base-C2 | Encoder-only | Fine-tuned | bertimbau_base-C2 |
| bertimbau_base-C3 | Encoder-only | Fine-tuned | bertimbau_base-C3 |
| bertimbau_base-C4 | Encoder-only | Fine-tuned | bertimbau_base-C4 |
| bertimbau_base-C5 | Encoder-only | Fine-tuned | bertimbau_base-C5 |
| bertimbau_large-C1 | Encoder-only | Fine-tuned | bertimbau_large-C1 |
| bertimbau_large-C2 | Encoder-only | Fine-tuned | bertimbau_large-C2 |
| bertimbau_large-C3 | Encoder-only | Fine-tuned | bertimbau_large-C3 |
| bertimbau_large-C4 | Encoder-only | Fine-tuned | bertimbau_large-C4 |
| bertimbau_large-C5 | Encoder-only | Fine-tuned | bertimbau_large-C5 |
| llama3-8b-C1 | Decoder-only | LoRA | llama3-8b-C1 |
| llama3-8b-C2 | Decoder-only | LoRA | llama3-8b-C2 |
| llama3-8b-C3 | Decoder-only | LoRA | llama3-8b-C3 |
| llama3-8b-C4 | Decoder-only | LoRA | llama3-8b-C4 |
| llama3-8b-C5 | Decoder-only | LoRA | llama3-8b-C5 |
| phi3.5-C1 | Decoder-only | LoRA | phi3.5-C1 |
| phi3.5-C2 | Decoder-only | LoRA | phi3.5-C2 |
| phi3.5-C3 | Decoder-only | LoRA | phi3.5-C3 |
| phi3.5-C4 | Decoder-only | LoRA | phi3.5-C4 |
| phi3.5-C5 | Decoder-only | LoRA | phi3.5-C5 |
| phi4-C1 | Decoder-only | LoRA | phi4-C1 |
| phi4-C2 | Decoder-only | LoRA | phi4-C2 |
| phi4-C3 | Decoder-only | LoRA | phi4-C3 |
| phi4-C4 | Decoder-only | LoRA | phi4-C4 |
| phi4-C5 | Decoder-only | LoRA | phi4-C5 |

🧠 Additionally, API-only models (e.g., DeepSeek-R1, ChatGPT-4o, Sabiá-3) were evaluated but are not hosted on the Hub. Their predictions and logs are still included in this dataset.
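If you want to score essays with one of the fine-tuned encoder checkpoints listed above, the sketch below shows one way to do it, assuming the checkpoint is published as a standard `transformers` sequence-classification model. The repository id used here is hypothetical; take the real id from the AES JBCS2025 collection linked above.

```python
# Sketch: load a fine-tuned encoder model for one competence and score a text.
# The repository id below is hypothetical; use the real id from the collection.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "kamel-usp/mbert_base-C1"  # hypothetical id, for illustration only
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Texto de uma redação de exemplo.", return_tensors="pt", truncation=True)
scores = model(**inputs).logits.softmax(dim=-1)  # class probabilities per score level
print(scores)
```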


## 🧪 How to Use this Dataset

You can load the data with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Config and split names follow the YAML metadata above.
ds = load_dataset("kamel-usp/jbcs2025_experiments", "evaluation_results", split="evaluation_results")
```
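The bootstrap confidence intervals can be loaded the same way. The sketch below uses the config and split names exactly as they appear in the YAML metadata above (note the "boostrap" spelling of the split and file paths) and converts the result to a pandas DataFrame for inspection.

```python
# Sketch: load the bootstrap confidence intervals config and inspect it.
from datasets import load_dataset

ci = load_dataset(
    "kamel-usp/jbcs2025_experiments",
    "bootstrap_confidence_intervals",
    split="boostrap_confidence_intervals",  # split name as declared in the YAML configs
)
df = ci.to_pandas()
print(df.head())
```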

## 📄 License and Citation

This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0), as declared in the dataset metadata above.

If you use these artefacts, please cite our paper:

TODO