Dataset Card for VENUS
Dataset Summary
Data from: Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues
@inproceedings{Kim2025speaking,
  title={Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues},
  author={Youngmin Kim},
  year={2025},
  note={To appear}
}
We provide VENUS, a large-scale multimodal video dataset for learning nonverbal cues from video-grounded dialogues.
Please cite our work if you find the data helpful. (The citation format will be updated.)
Dataset Statistics
Split | Channels | Videos | Segments (10 min) | Frames (nonverbal annotations) | Utterances | Words
---|---|---|---|---|---|---
Train | 12 | 293 | 800 | ~ | ~ | %
Test | 4 | 113 | 200 | ~ | ~ | %
Language
English
Other Versions
Data Structure
Here's an overview of our dataset structure:
{
'channel_id': str, # YouTube channel ID
'video_id': str, # Video ID
'segment_id': int, # Segment ID within the video
'duration': str, # Segment time range within the video (e.g., '00:11:00 ~ 00:21:00')
'fps': int, # Frames per second
'conversation': [ # Conversation information (consisting of multiple utterances)
{
'utterance_id': int, # Utterance ID
'speaker': int, # Speaker ID (represented as an integer)
'text': str, # Full utterance text
'start_time': float, # Start time of the utterance (in seconds)
'end_time': float, # End time of the utterance (in seconds)
'words': [ # Word-level information
{
'word': str, # The word itself
'start_time': float, # Word-level start time
'end_time': float, # Word-level end time
}
]
}
],
'facial_expression': [ # Facial expression features
{
'utt_id': str, # ID of the utterance this expression is aligned to
'frame': str, # Frame identifier
'features': [float], # Facial feature vector (153-dimensional)
}
],
'body_language': [ # Body language features
{
'utt_id': str, # ID of the utterance this body language is aligned to
'frame': str, # Frame identifier
'features': [float], # Body movement feature vector (179-dimensional)
}
],
'harmful_utterance_id': [int], # List of utterance IDs identified as harmful
}
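As a minimal sketch of how these fields fit together, the snippet below loads one segment and counts the facial-expression frames aligned to each utterance. It assumes, based on the field names above, that utt_id in the feature lists is the string form of utterance_id in the conversation:

from collections import defaultdict
from datasets import load_dataset

dataset = load_dataset("winston1214/VENUS-1K", split="train")
example = dataset[0]  # one 10-minute segment

# Group facial-expression frames by the utterance they are aligned to.
frames_per_utt = defaultdict(list)
for entry in example["facial_expression"]:
    frames_per_utt[entry["utt_id"]].append(entry["features"])

for utt in example["conversation"]:
    # utt_id is stored as str, utterance_id as int (see schema), hence the cast.
    n_frames = len(frames_per_utt[str(utt["utterance_id"])])
    print(f"Speaker {utt['speaker']} "
          f"({utt['start_time']:.1f}s-{utt['end_time']:.1f}s): "
          f"{utt['text'][:40]!r} -> {n_frames} facial frames")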
Data Instances
See the example under Data Structure above.
Data Fields
See the field descriptions under Data Structure above.
Data Splits
Data splits can be accessed as:
from datasets import load_dataset

train_dataset = load_dataset("winston1214/VENUS-1K", split="train")
test_dataset = load_dataset("winston1214/VENUS-1K", split="test")
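Because each segment also carries a harmful_utterance_id list, flagged utterances can be filtered out with a standard datasets map call. This is a sketch based on the schema above, not an official preprocessing step:

from datasets import load_dataset

train_dataset = load_dataset("winston1214/VENUS-1K", split="train")

def drop_harmful(example):
    # Keep only utterances whose ID is not flagged as harmful.
    harmful = set(example["harmful_utterance_id"])
    example["conversation"] = [
        utt for utt in example["conversation"]
        if utt["utterance_id"] not in harmful
    ]
    return example

clean_train = train_dataset.map(drop_harmful)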
Curation Rationale
Full details are in the paper.
Source Data
We collect in-the-wild videos from YouTube and annotate FLAME (face) and SMPL-X (whole-body) parameters estimated by EMOCAv2 and OSX, respectively.
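Connecting these models to the schema above, the facial (presumably FLAME-derived) vectors are 153-dimensional and the body (SMPL-X-derived) vectors are 179-dimensional, which can be sanity-checked on one segment (a quick sketch, assuming the schema above):

from datasets import load_dataset

example = load_dataset("winston1214/VENUS-1K", split="train")[0]

# Per the schema: 153-dim facial features and 179-dim body features.
assert all(len(f["features"]) == 153 for f in example["facial_expression"])
assert all(len(f["features"]) == 179 for f in example["body_language"])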
Initial Data Collection
Full details are in the paper.
Annotations
Full details are in the paper.
Annotation Process
Full details are in the paper.
Who are the annotators?
We used an automatic annotation pipeline; the primary annotator was Youngmin Kim, the first author of the paper.
For any questions regarding the dataset, please contact the first author by e-mail.
Considerations for Using the Data
This dataset (VENUS) consists of 3D annotations of human subjects and text extracted from conversations in the videos. Note that the dialogues are sourced from online videos and may include informal or culturally nuanced expressions. Use this dataset with care, especially in applications involving human-facing interactions.
Licensing Information
The annotations we provide are licensed under CC-BY-4.0.