This book presents an in-depth exploration of multimodal learning toward recommendation, along with a comprehensive survey of the most important research topics and state-of-the-art methods in this area.
First, it presents a semantic-guided feature distillation method that employs a teacher-student framework to robustly extract effective recommendation-oriented features from generic multimodal features. Next, it introduces a novel multimodal attentive metric learning method to model users' diverse preferences for various items. It then proposes a disentangled multimodal representation learning recommendation model, which captures users' fine-grained attention to different modalities on each factor in user preference modeling. Furthermore, a meta-learning-based multimodal fusion framework is developed to model the various relationships among multimodal information. Building on the success of disentangled representation learning, the book further proposes an attribute-driven disentangled representation learning method, which uses attributes to guide the disentanglement process and thereby improve the interpretability and controllability of conventional recommendation methods. Finally, the book concludes with future research directions in multimodal learning toward recommendation.
The book is suitable for graduate students and researchers interested in multimodal learning and recommender systems. The multimodal learning methods presented are also applicable to other retrieval- and ranking-related research areas, such as image retrieval, moment localization, and visual question answering.
Fan Liu is a Research Fellow with the School of Computing, National University of Singapore (NUS). His research interests lie primarily in multimedia computing and information retrieval. His work has been published in top venues, including ACM SIGIR, MM, WWW, TKDE, TOIS, TMM, and TCSVT. He is an area chair of ACM MM and a senior PC member of CIKM.
Zhenyang Li is a postdoctoral researcher with the Hong Kong Generative AI Research and Development Center Limited. His research interests are primarily in recommendation and visual question answering. His work has been published in top venues, including ACM MM, TIP, and TMM.
Liqiang Nie is a Professor and the Dean of the School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen). His research interests are primarily in multimedia computing and information retrieval. He has co-authored more than 200 articles and four books. He is a regular area chair of ACM MM, NeurIPS, IJCAI, and AAAI, and a member of the ICME steering committee. He has received many awards, including the ACM MM and SIGIR Best Paper Honorable Mention in 2019, SIGMM Rising Star in 2020, TR35 China in 2020, DAMO Academy Young Fellow in 2020, and the SIGIR Best Student Paper Award in 2021.
From: GreatBookPrices, Columbia, MD, U.S.A.
Condition: New. Item number 49563644-n
Quantity: 1 available
From: Grand Eagle Retail, Bensenville, IL, U.S.A.
Paperback. Condition: New. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Item number 9783031831874
Quantity: 1 available
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand, so it takes 3-4 days longer. New stock. 152 pp. English. Item number 9783031831874
Quantity: 2 available
From: GreatBookPrices, Columbia, MD, U.S.A.
Condition: As New. Unread book in perfect condition. Item number 49563644
Quantity: 1 available
From: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after you place your order. Item number 2054249771
Quantity: More than 20 available
From: Books Puddle, New York, NY, U.S.A.
Condition: New. Item number 26403601002
Quantity: 4 available
From: Majestic Books, Hounslow, United Kingdom
Condition: New. Print on demand. Item number 410634677
Quantity: 4 available
From: Biblios, Frankfurt am Main, Hesse, Germany
Condition: New. Print on demand. Item number 18403600992
Quantity: 4 available
From: Revaluation Books, Exeter, United Kingdom
Paperback. Condition: Brand New. 169 pages. 9.25x6.10x9.25 inches. In stock. Item number x-303183187X
Quantity: 1 available
From: preigu, Osnabrück, Germany
Paperback. Condition: New. Multimodal Learning toward Recommendation | Fan Liu (et al.) | Paperback | xvii | English | 2025 | Springer | EAN 9783031831874 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Seller: preigu. Item number 130977204
Quantity: 5 available