- 😄 Former intern at BAAI and Zhipu AI, where my core work centered on training image foundation models.
- 👯 Previously a member of ByteDance's Intelligent Creation Lab, focusing on the DreamID series (DreamID, DreamID-V, and DreamID-Omni); currently part of ByteDance's Seed Vision Application team, working on Seedance 2.0.
- 🧠 My research interests lie in Large Multimodal Models (multimodal generation, understanding, agents, acceleration, and efficient inference), as well as product topics related to multimodality.
- ⚡ Open to collaborations and discussions on multimodal technology and product innovation.
- 💬 Reach me via fulong_ye@163.com or yefulong@bytedance.com.
🏎️
Rush...
Pinned

- AltDiffusion: Source code for the paper "AltDiffusion: A Multilingual Text-to-Image Diffusion Model"
- FlagAI-Open/FlagAI: FlagAI (Fast LArge-scale General AI models) is a fast, easy-to-use, and extensible toolkit for large-scale models.
- bytedance/DreamID-V: Bridging the Image-to-Video Gap for High-Fidelity Face Swapping via Diffusion Transformer
- DreamID-Omni (forked from Guoxu1233/DreamID-Omni): Unified Framework for Controllable Human-Centric Audio-Video Generation
