🇹🇷 BERTurkQA for Turkish Question-Answering

This model is a fine-tuned version of BERTurk (32k vocabulary) on SQuAD-TR, a machine-translated Turkish version of the original SQuAD 2.0 dataset. For details about the dataset, methodology, and experiments, refer to the corresponding research paper.


Citation

If you use this model in your research or application, please cite the following paper:

@article{incidelen8performance,
  title={Performance Evaluation of Transformer-Based Pre-Trained Language Models for Turkish Question-Answering},
  author={{\.I}ncidelen, Mert and Aydo{\u{g}}an, Murat},
  journal={Black Sea Journal of Engineering and Science},
  volume={8},
  number={2},
  pages={15--16},
  publisher={U{\u{g}}ur {\c{S}}EN}
}

How to Use

You can use the model directly with 🤗 Transformers:

from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="incidelen/bert-base-turkish-cased-qa"
)

result = qa(
    question="...",
    context="..."
)

print(result)

Evaluation Results

Exact Match (%): 55.29
F1 Score (%): 70.07
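The scores above follow the standard SQuAD-style evaluation: Exact Match checks whether the normalized predicted answer equals the normalized reference, and F1 measures token overlap between the two. A minimal sketch of these per-answer metrics (simplified: it omits SQuAD's English article stripping and does not handle Turkish-specific casefolding such as İ/ı):

```python
import string
from collections import Counter

def normalize(text):
    # Lowercase, strip punctuation, and collapse whitespace (SQuAD-style
    # normalization; note Python's lower() is not Turkish-aware).
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    return " ".join(text.split())

def exact_match(prediction, reference):
    # 1.0 if the normalized strings are identical, else 0.0.
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    # Token-level F1 between prediction and reference.
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

In a full evaluation, both metrics are averaged over the dataset, taking the best score against each question's set of reference answers.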

Acknowledgments

Special thanks to maydogan for their contributions and support.


