*WORK IN PROGRESS ...*

The implementation of the paper [**CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval**](https://arxiv.org/abs/2104.08860).

CLIP4Clip is a video-text retrieval model based on [CLIP (ViT-B/32)](https://github.com/openai/CLIP). In this work, we investigate three similarity calculation approaches: parameter-free type, sequential type, and tight type. The model achieves SOTA results on MSR-VTT, MSVD, and LSMDC by a significant margin.
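As a rough illustration of the simplest of these, a minimal sketch of the parameter-free similarity type (not the repository's actual code; the function name and shapes are assumptions for illustration): per-frame CLIP features are mean-pooled into one video embedding and compared to the text embedding by cosine similarity.

```python
# Hypothetical sketch of the "parameter-free" similarity type:
# mean-pool frame embeddings, then take cosine similarity with the
# text embedding. Not the repository's actual implementation.
import torch
import torch.nn.functional as F

def parameter_free_similarity(frame_embs: torch.Tensor,
                              text_emb: torch.Tensor) -> torch.Tensor:
    """frame_embs: (num_frames, dim) per-frame CLIP features.
    text_emb: (dim,) CLIP text feature.
    Returns a scalar cosine similarity in [-1, 1]."""
    video_emb = frame_embs.mean(dim=0)          # mean pooling over frames
    video_emb = F.normalize(video_emb, dim=-1)  # unit-normalize video side
    text_emb = F.normalize(text_emb, dim=-1)    # unit-normalize text side
    return video_emb @ text_emb                 # dot product = cosine sim

# Toy usage with random 512-d features (the ViT-B/32 embedding size)
frames = torch.randn(12, 512)
text = torch.randn(512)
score = parameter_free_similarity(frames, text)
```

Because this pooling introduces no new parameters, it can be used zero-shot on top of CLIP features; the sequential and tight types instead add learnable modules over the frame sequence.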
# Citation

If you find CLIP4Clip useful in your work, please cite the following paper:

```
@Article{Luo2021CLIP4Clip,
  author  = {Huaishao Luo and Lei Ji and Ming Zhong and Yang Chen and Wen Lei and Nan Duan and Tianrui Li},
  title   = {CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval},
  journal = {arXiv preprint arXiv:2104.08860},
  year    = {2021},
}
```
# Acknowledgments

Our code is based on [CLIP (ViT-B/32)](https://github.com/openai/CLIP) and [UniVL](https://github.com/microsoft/UniVL).
