A2Gen: Action-Aware Generative Model for Short Video Recommendation
A new arXiv paper proposes A2Gen, an action-aware generative sequence network for short video recommendation. Traditional models treat a video as a single holistic entity, which misses the nuanced preferences users express across its different segments. A2Gen instead treats user consumption as a temporal process: the timing of each user action is used to refine actions along the temporal dimension, and the refined actions are chained into one sequence for unified prediction. The architecture also includes a Context-aware Attention Module. The paper is published on arXiv under ID 2604.25834.
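The core idea of refining actions along the temporal dimension and chaining them can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's actual implementation: the segment length, token format, and function names below are all assumptions.

```python
# Hypothetical sketch (not the paper's implementation): model a user's
# consumption of one video as a temporal process. Each action carries the
# playback time at which it occurred; actions are bucketed into fixed-length
# video segments and chained, in time order, into one token sequence that a
# generative sequence model could consume for unified prediction.

SEGMENT_SECONDS = 5  # assumed segment length, for illustration only

def chain_actions(actions, video_length):
    """Bucket (seconds_into_video, action_name) pairs by segment and chain
    them in temporal order into a flat token sequence."""
    n_segments = max(1, -(-video_length // SEGMENT_SECONDS))  # ceil division
    buckets = [[] for _ in range(n_segments)]
    for t, name in actions:
        idx = min(int(t) // SEGMENT_SECONDS, n_segments - 1)
        buckets[idx].append((t, name))
    tokens = []
    for i, bucket in enumerate(buckets):
        for _, name in sorted(bucket):
            tokens.append(f"seg{i}:{name}")
    return tokens

session = [(1.2, "play"), (7.5, "like"), (3.0, "rewatch"), (12.0, "share")]
print(chain_actions(session, video_length=15))
# ['seg0:play', 'seg0:rewatch', 'seg1:like', 'seg2:share']
```

The point of the sketch is that two actions of the same type (e.g. "like") become different tokens depending on when in the video they happened, which is how timing can encode diverse user intentions.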
Key facts
- Paper ID: arXiv:2604.25834
- Proposes A2Gen (Action-Aware Generative Sequence Network)
- Addresses limitations of binary-classification models for short video recommendation
- Uses timing of user actions to represent diverse intentions
- Includes Context-aware Attention Module
- Published on arXiv
- Focuses on short video recommendation
- Treats user consumption as a temporal process
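The paper does not detail its Context-aware Attention Module here, but a generic context-conditioned attention step can be sketched as follows. All names and the pooling scheme are assumptions; this is scaled dot-product attention with a context vector as the query, not the authors' module.

```python
# Hypothetical sketch of context-aware attention: a context vector (e.g. a
# summary of recent user state) acts as the query over per-segment feature
# vectors, so segments relevant to the current context receive larger
# weights in the pooled representation.
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def context_attention(context, segments):
    """Scaled dot-product attention with `context` as the single query."""
    d = len(context)
    scores = [sum(c * s for c, s in zip(context, seg)) / math.sqrt(d)
              for seg in segments]
    weights = softmax(scores)
    pooled = [sum(w * seg[i] for w, seg in zip(weights, segments))
              for i in range(d)]
    return weights, pooled

ctx = [1.0, 0.0]                                  # toy context vector
segs = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]       # toy segment features
weights, pooled = context_attention(ctx, segs)
# the first segment, most aligned with the context, gets the largest weight
```

The design choice being illustrated is that attention weights depend on the context, so the same video segments can be pooled differently for different user states.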