---
dataset_info:
  features:
    - name: channel_id
      dtype: string
    - name: video_id
      dtype: string
    - name: segment_id
      dtype: int32
    - name: duration
      dtype: string
    - name: fps
      dtype: int32
    - name: conversation
      sequence:
        - name: utterance_id
          dtype: int32
        - name: speaker
          dtype: int32
        - name: text
          dtype: string
        - name: start_time
          dtype: float32
        - name: end_time
          dtype: float32
        - name: words
          sequence:
            - name: word
              dtype: string
            - name: start_time
              dtype: float32
            - name: end_time
              dtype: float32
    - name: facial_expression
      sequence:
        - name: utt_id
          dtype: string
        - name: frame
          dtype: string
        - name: features
          sequence: float32
    - name: body_language
      sequence:
        - name: utt_id
          dtype: string
        - name: frame
          dtype: string
        - name: features
          sequence: float32
    - name: harmful_utterance_id
      sequence: int32
    - name: speaker_bbox
      list:
        - name: bbox
          sequence: int64
        - name: frame_id
          dtype: int64
  splits:
    - name: train
      num_bytes: 13538714571
      num_examples: 800
    - name: test
      num_bytes: 3766543145
      num_examples: 200
  download_size: 16484082115
  dataset_size: 17305257716
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

Dataset Card for VENUS

Dataset Summary

Data from: Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues

@article{kim2025speaking,
  title={Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues},
  author={Kim, Youngmin and Chung, Jiwan and Kim, Jisoo and Lee, Sunghyun and Lee, Sangkyu and Kim, Junhyeok and Yang, Cheoljong and Yu, Youngjae},
  journal={arXiv preprint arXiv:2506.00958},
  year={2025}
}

We provide a large-scale multimodal video dataset for learning nonverbal cues from video-grounded dialogues.

Please cite our work if you find our data helpful.

Our dataset collection pipeline and the model implementation that uses it are available at https://github.com/winston1214/nonverbal-conversation

Dataset Statistics

| Split | Channels | Videos | Segments (10 min) | Frames (Nonverbal annotations) | Utterances | Words     |
|-------|----------|--------|-------------------|--------------------------------|------------|-----------|
| Train | 12       | 293    | 800               | 9,727,175                      | 11,855     | 1,385,404 |
| Test  | 4        | 113    | 200               | 2,651,674                      | 4,356      | 393,752   |

Language

English

Other Version

Data Structure

Here's an overview of our dataset structure:

{
    'channel_id': str,  # YouTube channel ID
    'video_id': str,  # Video ID
    'segment_id': int,  # Segment ID within the video
    'duration': str,  # Segment time range within the video (e.g., '00:11:00 ~ 00:21:00')
    'fps': int,  # Frames per second

    'conversation': [  # Conversation information (consisting of multiple utterances)
        {
            'utterance_id': int,  # Utterance ID
            'speaker': int,  # Speaker ID (represented as an integer)
            'text': str,  # Full utterance text
            'start_time': float,  # Start time of the utterance (in seconds)
            'end_time': float,  # End time of the utterance (in seconds)
            'words': [  # Word-level information
                {
                    'word': str,  # The word itself
                    'start_time': float,  # Word-level start time
                    'end_time': float,  # Word-level end time
                }
            ]
        }
    ],

    'facial_expression': [  # Facial expression features
        {
            'utt_id': str,  # ID of the utterance this expression is aligned to
            'frame': str,  # Frame identifier
            'features': [float],  # Facial feature vector (153-dimensional)
        }
    ],

    'body_language': [  # Body language features
        {
            'utt_id': str,  # ID of the utterance this body language is aligned to
            'frame': str,  # Frame identifier
            'features': [float],  # Body movement feature vector (179-dimensional)
        }
    ],

    'harmful_utterance_id': [int],  # List of utterance IDs identified as harmful

    'speaker_bbox': [  # Per-frame speaker bounding boxes
        {
            'frame_id': int,  # Frame identifier
            'bbox': [int],  # [x_top, y_top, x_bottom, y_bottom]
        }
    ],
}
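
As a rough illustration of how these nested fields can be traversed, the sketch below prints one example's transcript and groups the per-frame facial features by utterance. It is a minimal sketch, assuming the column-wise layout that Hugging Face Datasets uses for sequence-of-struct fields (each field comes back as a list); if your version returns a list of dicts instead, adapt the zip calls accordingly.

from collections import defaultdict
from datasets import load_dataset

ex = load_dataset("winston1214/VENUS-1K", split="train")[0]

# Utterance-level transcript with timings (column-wise: each field is a list).
conv = ex["conversation"]
for utt_id, speaker, text, start, end in zip(
    conv["utterance_id"], conv["speaker"], conv["text"],
    conv["start_time"], conv["end_time"],
):
    print(f"[{start:8.2f}-{end:8.2f}s] speaker {speaker} (utt {utt_id}): {text}")

# Group the 153-dim facial-expression vectors by the utterance they belong to.
# The exact string format of utt_id is not documented here, so it is used only
# as an opaque grouping key.
face = ex["facial_expression"]
frames_per_utt = defaultdict(list)
for utt_id, feats in zip(face["utt_id"], face["features"]):
    frames_per_utt[utt_id].append(feats)

for utt_id, frames in frames_per_utt.items():
    print(utt_id, "->", len(frames), "frames of facial features")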

Data Instances

See above

Data Fields

See above

Data Splits

Data splits can be accessed as:

from datasets import load_dataset
train_dataset = load_dataset("winston1214/VENUS-1K", split="train")
test_dataset = load_dataset("winston1214/VENUS-1K", split="test")
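
The full download is roughly 16 GB, so streaming can be convenient for a quick look. A minimal sketch (not part of the official instructions):

from datasets import load_dataset

# Stream the train split instead of downloading everything up front.
stream = load_dataset("winston1214/VENUS-1K", split="train", streaming=True)
first = next(iter(stream))
print(first["channel_id"], first["video_id"], first["segment_id"], first["fps"])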

Curation Rationale

Full details are in the paper.

Source Data

We retrieve natural videos from YouTube and annotate FLAME and SMPL-X parameters using EMOCAv2 and OSX, respectively.
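
To work with these parameters numerically, the per-frame vectors can be stacked into arrays. A minimal sketch, continuing from the traversal example above and treating the 153-dim facial and 179-dim body vectors as opaque features (their exact FLAME/SMPL-X layout is described in the paper):

import numpy as np

# Stack the per-frame feature lists into (num_frames, dim) arrays.
face_feats = np.asarray(ex["facial_expression"]["features"], dtype=np.float32)  # (N_face, 153)
body_feats = np.asarray(ex["body_language"]["features"], dtype=np.float32)      # (N_body, 179)
print(face_feats.shape, body_feats.shape)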

Initial Data Collection

Full details are in the paper.

Annotations

Full details are in the paper.

Annotation Process

Full details are in the paper.

Who are the annotators?

We used an automatic annotation method, and the primary annotator was Youngmin Kim, the first author of the paper.

For any questions regarding the dataset, please contact the first author by e-mail.

Considerations for Using the Data

This dataset (VENUS) consists of 3D annotations of human subjects and text extracted from conversations in the videos. Please note that the dialogues are sourced from online videos and may include informal or culturally nuanced expressions. Use this dataset with care, especially in applications involving human-facing interaction.
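
As one simple precaution along these lines, utterances flagged via harmful_utterance_id can be dropped before using the transcripts. A minimal sketch, continuing from the examples above:

# Keep only utterances that are not flagged as harmful.
harmful = set(ex["harmful_utterance_id"])
conv = ex["conversation"]
clean_texts = [
    text
    for utt_id, text in zip(conv["utterance_id"], conv["text"])
    if utt_id not in harmful
]
print(len(clean_texts), "of", len(conv["text"]), "utterances kept")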

Licensing Information

The annotations we provide are licensed under CC-BY-4.0.