This model supports at most 64 frames.
## Usage

### Intended use

The model was trained on EPIC-KITCHENS-100-MQA [dataset release pending] and [LLaVA-Video-178K](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K). It has improved capability for understanding human egocentric actions in videos.

### Generation

We provide a simple generation example below; for more details, please refer to our [GitHub](https://github.com/AdaptiveMotorControlLab/LLaVAction).

```python
!pip install llavaction