harpreetsahota committed · Commit 7c13ff8 · verified · 1 Parent(s): 895c589

Update README.md

Files changed (1): README.md (+99 −79)
@@ -3,24 +3,24 @@ annotations_creators: []
language: en
size_categories:
- 10K<n<100K
- task_categories: []
task_ids: []
pretty_name: SynthHuman
tags:
- fiftyone
- group
- dataset_summary: '
+ dataset_summary: >

- This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 3000 samples.
+ This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 3000
+ samples.

## Installation

- If you haven''t already, install FiftyOne:
+ If you haven't already, install FiftyOne:

```bash
@@ -42,9 +42,9 @@ dataset_summary: '

# Load the dataset

- # Note: other available arguments include ''max_samples'', etc
+ # Note: other available arguments include 'max_samples', etc

- dataset = load_from_hub("harpreetsahota/SynthHuman")
+ dataset = load_from_hub("Voxel51/SynthHuman")

# Launch the App
@@ -52,8 +52,7 @@ dataset_summary: '
session = fo.launch_app(dataset)

```
-
- '
+ license: cdla-permissive-2.0
---

# Dataset Card for SynthHuman
@@ -82,7 +81,7 @@ from fiftyone.utils.huggingface import load_from_hub

# Load the dataset
# Note: other available arguments include 'max_samples', etc
- dataset = load_from_hub("harpreetsahota/SynthHuman")
+ dataset = load_from_hub("Voxel51/SynthHuman")

# Launch the App
session = fo.launch_app(dataset)
@@ -93,130 +92,151 @@ session = fo.launch_app(dataset)

### Dataset Description

- <!-- Provide a longer summary of what this dataset is. -->
+ The SynthHuman dataset is a high-fidelity synthetic dataset created for training human-centric computer vision models. It contains 300,000 high-resolution (384×512) images with ground-truth annotations for three main tasks: relative depth estimation, surface normal estimation, and soft foreground segmentation. The dataset features procedurally generated human subjects in diverse poses, environments, lighting, and appearances, with an equal distribution of face, upper-body, and full-body scenarios.
+
+ Unlike scan-based synthetic datasets, SynthHuman uses high-fidelity procedural generation techniques to create detailed human representations, including strand-level hair (with hundreds of thousands of individual 3D strands per hairstyle), detailed clothing, accessories, and expressive faces. This approach enables ground-truth annotations with strand-level granularity that capture fine details like facial wrinkles, eyelids, hair strands, and subtle texture variations.

- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
+ - **Curated by:** Microsoft Research, Cambridge, UK
+ - **Funded by:** Microsoft
+ - **Shared by:** Microsoft
- **Language(s) (NLP):** en
- - **License:** [More Information Needed]
+ - **License:** [CDLA-Permissive-2.0](https://github.com/microsoft/DAViD/blob/main/LICENSE-CDLA-2.0.txt)

- ### Dataset Sources [optional]
-
- <!-- Provide the basic links for the dataset. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the dataset is intended to be used. -->
+ ### Dataset Sources
+
+ - **Repository:** https://aka.ms/DAViD
+ - **Paper:** DAViD: Data-efficient and Accurate Vision Models from Synthetic Data (arXiv:2507.15365)
+ - **Parsing to FiftyOne format:** https://github.com/harpreetsahota204/synthhuman_to_fiftyone
+
+ ## Uses

### Direct Use

- <!-- This section describes suitable use cases for the dataset. -->
-
- [More Information Needed]
+ The SynthHuman dataset is designed for training computer vision models for human-centric dense prediction tasks, specifically:
+
+ 1. **Relative depth estimation**: Predicting per-pixel depth values for human subjects
+ 2. **Surface normal estimation**: Predicting per-pixel surface normal vectors (xyz components)
+ 3. **Soft foreground segmentation**: Generating soft alpha masks to separate humans from backgrounds
+
+ The dataset enables training smaller, more efficient models that achieve state-of-the-art accuracy without requiring large-scale pretraining or complex multi-stage training pipelines. This makes it suitable for applications with computational constraints.

- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]
+ ### Out-of-Scope Use
+
+ The dataset should not be used for:
+ - Identifying or recognizing specific individuals
+ - Creating deceptive or misleading synthetic human content
+ - Applications that could violate privacy or cause harm to real individuals
+ - Training models for tasks beyond the three specified dense prediction tasks without proper evaluation

## Dataset Structure

- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
- [More Information Needed]
+ The SynthHuman dataset contains 300,000 synthetic images of resolution 384×512, with an equal distribution (100,000 each) across three categories:
+ 1. Face scenarios
+ 2. Upper-body scenarios
+ 3. Full-body scenarios
+
+ Each sample in the dataset includes:
+ - RGB rendered image
+ - Soft foreground mask (alpha channel)
+ - Surface normals (3-channel)
+ - Depth ground-truth annotations
+
+ The dataset is designed to be diverse in terms of:
+ - Human poses and expressions
+ - Environments and lighting conditions
+ - Physical appearances (body shapes, clothing, accessories)
+ - Camera viewpoints

## Dataset Creation

### Curation Rationale

- <!-- Motivation for the creation of this dataset. -->
-
- [More Information Needed]
+ The dataset was created to address limitations in existing human-centric computer vision datasets, which often suffer from:
+ 1. Imperfect ground-truth annotations due to reliance on photogrammetry or noisy sensors
+ 2. Limited diversity in subjects and environments due to challenges in capturing in-the-wild data
+ 3. Inability to capture fine details like hair strands, reflective surfaces, and subtle geometric features

### Source Data

- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-
#### Data Collection and Processing

- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
-
- [More Information Needed]
+ The dataset generation process involved sampling from:
+ - Face/body shapes (from training sources and a library of 3572 scans)
+ - Expressions and poses (from AMASS, MANO, and other sources)
+ - Textures (from high-resolution face scans with expression-based dynamic wrinkle maps)
+ - Hair styles (548 strand-level 3D hair models, each with 100K+ strands)
+ - Accessories (36 glasses, 57 headwear items)
+ - Clothing (50+ clothing tops)
+ - Environments (a mix of HDRIs and 3D environments)
+
+ Rendering the complete dataset took 72 hours on a cluster of 300 machines with M60 GPUs (equivalent to roughly 2 weeks on a 4-GPU A100 machine).

- #### Who are the source data producers?
-
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
- [More Information Needed]
+ #### Who are the source data producers?
+
+ The dataset was created by researchers at Microsoft Research in Cambridge, UK. The synthetic data was procedurally generated using artist-created assets, scanned data sources, and procedural generation techniques.

- ### Annotations [optional]
-
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
- #### Annotation process
-
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- <!-- This section describes the people or systems who created the annotations. -->
-
- [More Information Needed]
+ ### Annotations
+
+ #### Annotation process
+
+ Since this is a synthetic dataset, the annotations are generated programmatically during the rendering process rather than being manually created. This ensures perfect alignment between the RGB images and their corresponding ground truths.
+
+ Special attention was given to:
+ 1. **Hair representation**: A voxel-grid volume with density based on strand geometry was created, then converted to a coarse proxy mesh using marching cubes to generate interpretable normal vectors.
+ 2. **Transparent surfaces**: The dataset provides control over whether the depth and normals of translucent surfaces (such as eyeglass lenses) are included or whether they show the surface behind them.
+ 3. **Soft foreground masks**: Generated with pixel-perfect accuracy, including partial transparency for hair strands and other fine structures.

- #### Personal and Sensitive Information
-
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
- [More Information Needed]
+ #### Personal and Sensitive Information
+
+ The dataset contains only synthetic human representations and does not include any real personal or sensitive information. The synthetic data generation process ensures that no real individuals are represented in the dataset.

## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
+ While the paper doesn't explicitly discuss biases in the dataset, there are potential limitations:
+ - The paper notes that there may be aspects of human diversity not yet represented in the dataset
+ - The synthetic nature of the data might not fully capture all real-world scenarios and edge cases
+ - Models trained on this data may have lower accuracy for some demographic groups (acknowledged as a potential issue in the paper)
+ - Failure cases noted in the paper include extreme lighting conditions, printed patterns on clothing, tattoos, and rare scale variations (e.g., a baby held in an adult's hand)

### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
+ Users should be aware of the following:
+ - The dataset creators acknowledge that the synthetic data approach helps address fairness concerns by providing precise control over the training data distribution
+ - Additional diversity in assets and scene variations could improve robustness to real-world scenarios
+ - Users should test models trained on this data across diverse real-world populations to ensure fair performance
+ - For applications involving human subjects, users should consider the ethical implications and potential biases
+ - Supplementing with real-world data for specific challenging scenarios might be beneficial
+
+ ## Citation

**BibTeX:**

- [More Information Needed]
+ ```bibtex
+ @misc{saleh2025david,
+       title={{DAViD}: Data-efficient and Accurate Vision Models from Synthetic Data},
+       author={Fatemeh Saleh and Sadegh Aliakbarian and Charlie Hewitt and Lohit Petikam and Xiao-Xian and Antonio Criminisi and Thomas J. Cashman and Tadas Baltrušaitis},
+       year={2025},
+       eprint={2507.15365},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2507.15365},
+ }
+ ```

**APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Dataset Card Authors [optional]
-
- [More Information Needed]
-
- ## Dataset Card Contact
-
- [More Information Needed]
+ Saleh, F., Aliakbarian, S., Hewitt, C., Petikam, L., Xiao-Xian, Criminisi, A., Cashman, T. J., & Baltrušaitis, T. (2025). DAViD: Data-efficient and accurate vision models from synthetic data. arXiv preprint arXiv:2507.15365.
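
A few usage sketches for the updated card follow. First, browsing the grouped FiftyOne dataset: the card's `group` tag indicates a grouped media type, but the exact slice names are not stated above, so this sketch inspects them at runtime rather than assuming them.

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load a small number of samples for a quick look
dataset = load_from_hub("Voxel51/SynthHuman", max_samples=10)

# Each group bundles the related renderings (RGB, depth, normals, ...)
# for one scene; discover the actual slice names rather than guessing
print(dataset.media_type)    # expected: "group"
print(dataset.group_slices)  # available slices

# Flatten one slice into a regular view of samples
view = dataset.select_group_slices(dataset.group_slices[0])
print(view.first())
```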
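Second, a minimal sketch of consuming the 3-channel surface normals. The on-disk encoding is not specified in the card; this assumes the common convention of xyz components mapped into [0, 1], and uses a random array as a stand-in for a loaded image.

```python
import numpy as np

# Stand-in for a loaded 3-channel normal image (H, W, 3) in [0, 1]
normal_img = np.random.rand(512, 384, 3).astype(np.float32)

# Map [0, 1] -> [-1, 1] and re-normalize to unit length
normals = normal_img * 2.0 - 1.0
normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-8

# Each pixel now holds a unit-length xyz normal vector
print(normals.shape, np.linalg.norm(normals, axis=-1).round(3).max())
```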
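Third, because the foreground masks are soft alpha values rather than binary labels, compositing onto a new background is a per-pixel blend. Illustrative arrays stand in for real data here:

```python
import numpy as np

fg = np.random.rand(512, 384, 3).astype(np.float32)     # stand-in for the RGB render
alpha = np.random.rand(512, 384, 1).astype(np.float32)  # stand-in for the soft mask
bg = np.zeros((512, 384, 3), dtype=np.float32)          # new background

# Soft masks preserve partial transparency (e.g., individual hair strands)
composite = alpha * fg + (1.0 - alpha) * bg
```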
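Finally, the hair-annotation idea described under "Annotation process" (a density volume built from strand geometry, converted to a proxy mesh via marching cubes) can be illustrated with scikit-image. This is a sketch of the general technique under stated assumptions, not the authors' pipeline; the random points stand in for 3D hair-strand vertices.

```python
import numpy as np
from skimage import measure

rng = np.random.default_rng(0)
points = rng.random((10_000, 3))  # stand-in for strand vertices in [0, 1]^3

# Rasterize the strand points into a coarse density volume
grid = np.zeros((64, 64, 64), dtype=np.float32)
idx = np.minimum((points * 64).astype(int), 63)
np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)

# Marching cubes extracts a proxy surface whose per-vertex normals are
# smooth and interpretable, unlike raw per-strand geometry
verts, faces, normals, _ = measure.marching_cubes(grid, level=0.5)
print(verts.shape, faces.shape, normals.shape)
```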