The multimodal large model HappyHorse (an open-source unified large model for text-to-video / image-to-video with audio) has recently been making waves on the international stage. After verification from multiple sources, the team behind it has been identified: the Taobao and Tmall Group (TTG) Future Life Lab, led by Zhang Di (the lab was created under the ATH-AI Innovation Business Department and has since become an independent entity).

Profile of Zhang Di: He holds both a Bachelor's and a Master's degree from Shanghai Jiao Tong University. He is the head of the TTG Future Life Lab (rank: P11) and reports to Zheng Bo, Chief Scientist of TTG and CTO of Alimama. He previously served as the lead (No. 1 position) for Kuaishou's ing. And prior to that, he was the head of Big Data and Machine Learning Engineering Architecture at Alimama.

P.S.
[Basic Information]
[Video Parameters]
Resolution: 1280×720 (720p)
Frame Rate: 24fps
Duration: 5 seconds

[Audio Capabilities]
Native Synchronous Generation: Sound effects / Ambient sound / Voiceover
Supported Languages: Chinese, English, Japanese, Korean, German, French

[Open Source Status]
Fully Open Source: Base model + Distilled model + Super-resolution + Inference code

Source: https://mp.weixin.qq.com/s/n66lk5q_Mm10UYTnpEOf3w?poc_token=HKwe1mmjFX-RhveuVjk_MbRgFTcirVE2tKrRP_gS
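For a rough sense of what those video parameters imply per clip, here is a quick back-of-the-envelope check. This is plain arithmetic on the published numbers only; it assumes nothing about the model or its inference code:

```python
# Sanity check of the published clip parameters (720p, 24fps, 5 s).
WIDTH, HEIGHT = 1280, 720   # 720p resolution
FPS = 24                    # frame rate
DURATION_S = 5              # clip length in seconds

frames = FPS * DURATION_S              # frames generated per clip
pixels_per_frame = WIDTH * HEIGHT      # pixels in each frame
total_pixels = frames * pixels_per_frame

print(frames)        # 120
print(total_pixels)  # 110592000 pixels per 5-second clip
```

So each clip is 120 frames and roughly 110 million raw pixels before any super-resolution step.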