---
annotations_creators:
  - manual
language:
  - kac
license: cc0-1.0
multilinguality: monolingual
pretty_name: Kachin ASR Audio
tags:
  - automatic-speech-recognition
  - audio
  - webdataset
  - kachin
  - indigenous
  - public-domain
task_categories:
  - automatic-speech-recognition
language_creators:
  - found
source_datasets:
  - original
---

## Dataset Summary

To our knowledge, this is the first publicly released Kachin-language ASR dataset.

Kachin ASR Audio is a collection of speech data in the Kachin (Jinghpaw) language, sourced entirely from publicly available PVTV (People’s Voice Television) broadcasts. The dataset includes narration, interviews, and spoken reports intended to support the development of automatic speech recognition (ASR) systems for low-resource indigenous languages in Myanmar.

Each audio file is paired with metadata including the original filename and duration. The dataset is distributed in WebDataset .tar format for efficient streaming and scalable training workflows.

## Supported Tasks and Use Cases

This dataset is primarily intended for:

- Automatic Speech Recognition (ASR): Training and evaluating models that transcribe spoken Kachin into text.
- Low-Resource Language Research: Developing language models, acoustic models, or linguistic tools for indigenous languages.
- Speech Corpus Alignment: Providing a foundation for aligning future Kachin-language transcriptions or translations.
- Government and News Voice Modeling: Supporting speech synthesis, diarization, or speaker identification tasks using public-domain voices.

Researchers, linguists, and developers working on Southeast Asian NLP or multilingual ASR systems can benefit from this dataset.

## Dataset Structure

The dataset is packaged as a single .tar archive in WebDataset format. Each sample consists of:

- `<key>.mp3` — the audio file containing a speech segment in Kachin
- `<key>.json` — a metadata file with the following fields:
  - `file_name`: standardized filename (e.g., `00001.mp3`)
  - `original_file`: original filename before chunking (e.g., `001.mp3`)
  - `duration`: length of the audio segment in seconds (float)

All files are stored inside a `train/kachin.tar` archive, allowing for efficient streaming and batched processing using WebDataset-compatible tools.
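For reference, here is a minimal sketch of inspecting one sample's metadata with Python's standard library; it assumes the archive has already been downloaded locally as `train/kachin.tar` and stops after the first metadata file it finds:

```python
import json
import tarfile

# Peek at one sample's metadata inside the WebDataset archive.
# Assumes train/kachin.tar has been downloaded locally.
with tarfile.open("train/kachin.tar") as tar:
    for member in tar:
        if member.name.endswith(".json"):
            metadata = json.load(tar.extractfile(member))
            # Expected fields: file_name, original_file, duration
            print(member.name, metadata["file_name"], metadata["duration"])
            break
```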

## Data Splits

This release provides a single train split containing all available audio chunks.

- Train: 6,786 audio segments
- Validation/Test: Not included in this version

If needed, users can partition the dataset into training, validation, and test sets using random sampling, duration thresholds, or speaker-based segmentation strategies.
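For example, a minimal sketch of a random split with the `datasets` library (the 90/10 ratio and seed are illustrative choices, not part of this release):

```python
from datasets import load_dataset

# Download the full split (non-streaming) and carve out a held-out portion.
dataset = load_dataset("freococo/kachin_asr_audio", split="train")
splits = dataset.train_test_split(test_size=0.1, seed=42)

train_set = splits["train"]
eval_set = splits["test"]
print(len(train_set), len(eval_set))
```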

## Dataset Creation and Sources

This dataset was curated from publicly available audio published by:

- PVTV (People’s Voice Television) — an independent, public-facing news media channel that broadcasts audio in ethnic languages, including Kachin (Jinghpaw)

All audio was downloaded directly from PVTV's YouTube channel. The content is in the public domain or released under open access media guidelines, making it legally shareable for research and development purposes.

Audio segments were chunked using a combination of silence-based segmentation and manual review. Only segments containing clear speech were retained.
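As an illustration of the silence-based approach (not the exact pipeline used for this release), here is a chunking sketch with `pydub`; the input filename and thresholds are placeholders and would need tuning per recording:

```python
from pydub import AudioSegment
from pydub.silence import split_on_silence

# Split a long broadcast recording on stretches of silence.
audio = AudioSegment.from_file("broadcast.mp3")
chunks = split_on_silence(
    audio,
    min_silence_len=700,             # at least 700 ms of silence marks a boundary
    silence_thresh=audio.dBFS - 16,  # relative to the clip's average loudness
    keep_silence=200,                # keep 200 ms of padding around each chunk
)

# Export each chunk with a zero-padded name, matching the dataset's file_name style.
for i, chunk in enumerate(chunks, start=1):
    chunk.export(f"{i:05d}.mp3", format="mp3")
```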

## Licensing and Limitations

This dataset is released under the Creative Commons Zero (CC0 1.0) license, placing it in the public domain.

### ✅ Permitted Uses

- Academic and commercial use
- Fine-tuning and publishing ASR models
- Linguistic research on the Kachin/Jinghpaw language
- Audio analysis, transcription, and synthesis

### ⚠️ Limitations

- No transcripts are currently included — this release contains audio and per-segment metadata only
- Audio may contain background music, radio effects, or variable recording quality
- Speaker identities are not labeled or anonymized

Please respect cultural and ethical norms when using indigenous voice data in production systems or media.

## Usage Example (Python)

To stream the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset(
    "freococo/kachin_asr_audio",
    split="train",
    streaming=True,
)

for sample in dataset:
    # With the WebDataset loader, the decoded audio is typically exposed under "mp3"
    # and the per-segment metadata dict under "json".
    metadata = sample["json"]
    print(sample["mp3"], metadata["original_file"], metadata["duration"])
```

This loads the audio + JSON metadata from the .tar directly using Hugging Face’s streaming interface (WebDataset support).
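Most ASR encoders (e.g. wav2vec2 or Whisper) expect 16 kHz input. Below is a minimal sketch of on-the-fly resampling, assuming the default WebDataset column names (`mp3` for the decoded audio, `json` for the metadata dict):

```python
from datasets import Audio, load_dataset

dataset = load_dataset("freococo/kachin_asr_audio", split="train", streaming=True)
# Decode and resample the audio column to 16 kHz as samples are streamed.
dataset = dataset.cast_column("mp3", Audio(sampling_rate=16_000))

sample = next(iter(dataset))
waveform = sample["mp3"]["array"]   # float waveform at 16 kHz
metadata = sample["json"]           # file_name, original_file, duration
print(waveform.shape, metadata["duration"])
```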

## Acknowledgements

Special thanks to:

- PVTV (People’s Voice Television) for publishing Kachin-language broadcasts freely
- Indigenous Kachin journalists and radio presenters for giving voice to their community
- The open-source ASR community and Hugging Face for infrastructure and tooling

## Citation

```bibtex
@misc{freococo2025kachin,
  title        = {Kachin ASR Audio Dataset},
  author       = {freococo},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/freococo/kachin_asr_audio}},
  note         = {Public domain speech dataset for Kachin (Jinghpaw) ASR research}
}
```