This model is a fine-tuned version of BERTurk 32k on SQuAD-TR, a machine-translated Turkish version of the original SQuAD 2.0 dataset. For details on the dataset, methodology, and experiments, refer to the corresponding research paper.
If you use this model in your research or application, please cite the following paper:
```bibtex
@article{incidelen8performance,
  title={Performance Evaluation of Transformer-Based Pre-Trained Language Models for Turkish Question-Answering},
  author={{\.I}ncidelen, Mert and Aydo{\u{g}}an, Murat},
  journal={Black Sea Journal of Engineering and Science},
  volume={8},
  number={2},
  pages={15--16},
  publisher={U{\u{g}}ur {\c{S}}EN}
}
```
You can use the model directly with 🤗 Transformers:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="incidelen/bert-base-turkish-cased-qa"
)

result = qa(
    question="...",
    context="..."
)
print(result)
```
| Exact Match (%) | F1 Score (%) |
|---|---|
| 55.29 | 70.07 |
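For readers who want to reproduce scores like the ones above on their own predictions, here is a minimal sketch of SQuAD-style Exact Match and token-level F1. It is a simplification: the official SQuAD evaluation script additionally strips punctuation and (English) articles during normalization, and the paper may use a different normalization for Turkish.

```python
from collections import Counter


def exact_match(pred: str, gold: str) -> int:
    # Simplified EM: compare after lowercasing and trimming whitespace.
    return int(pred.strip().lower() == gold.strip().lower())


def f1_score(pred: str, gold: str) -> float:
    # Token-level F1 over the overlap between prediction and gold answer.
    pred_tokens = pred.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Averaging these two metrics over all question-answer pairs in the evaluation split yields the percentages reported in the table.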
Special thanks to maydogan for their contributions and support.
Base model: dbmdz/bert-base-turkish-cased