Video segmentation is the most fundamental process for appropriate indexing and retrieval of video intervals. In general, video streams are composed of shots delimited by physical shot boundaries. Substantial work has been done on how to detect such shot boundaries automatically (Arman et al., 1993; Zhang et al., 1993; Zhang et al., 1995; Kobla et al., 1997). Through the integration of technologies such as image processing, speech/character recognition and natural language understanding, keywords can be extracted and associated with these shots for indexing (Wactlar et al., 1996). A single shot, however, rarely carries enough information to be meaningful by itself. Usually, it is a semantically meaningful interval that most users are interested in retrieving, and such meaningful intervals generally span several consecutive shots. There is hardly any efficient and reliable technique, either automatic or manual, for identifying all semantically meaningful intervals within a video stream. Works by (Smith and Davenport, 1992; Oomoto and Tanaka, 1993; Weiss et al., 1995; Hjelsvold et al., 1996) suggest manually defining all such intervals in the database in advance. However, even an hour-long video may have an indefinite number of meaningful intervals. Moreover, video data is open to multiple interpretations: given a query, what is a meaningful interval to an annotator may not be meaningful to the user who issues the query. In practice, manual indexing of meaningful intervals is labour intensive and inadequate.
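To make the automatic shot-boundary detection mentioned in the description concrete, the following is a minimal sketch of one common family of techniques, comparing colour histograms of consecutive frames. It is illustrative only and not the specific methods of the works cited above; the bin count, the distance threshold, and the synthetic frames are assumptions chosen for the example.

    # Minimal sketch of histogram-based shot-boundary detection (illustrative only).
    # The 8-bin-per-channel histogram and the 0.3 threshold are assumed values.
    import numpy as np

    def color_histogram(frame, bins=8):
        """Concatenated per-channel histogram, normalized to sum to 1."""
        hist = np.concatenate([
            np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
            for c in range(frame.shape[-1])
        ]).astype(float)
        return hist / hist.sum()

    def detect_shot_boundaries(frames, threshold=0.3):
        """Flag a boundary wherever consecutive frame histograms differ strongly."""
        boundaries = []
        prev = color_histogram(frames[0])
        for i, frame in enumerate(frames[1:], start=1):
            cur = color_histogram(frame)
            # L1 distance between normalized histograms lies in [0, 2].
            if np.abs(cur - prev).sum() > threshold:
                boundaries.append(i)
            prev = cur
        return boundaries

    # Toy example: two visually distinct synthetic "shots" of 5 frames each.
    rng = np.random.default_rng(0)
    shot_a = rng.integers(0, 80, size=(5, 48, 64, 3))     # dark frames
    shot_b = rng.integers(170, 255, size=(5, 48, 64, 3))  # bright frames
    frames = np.concatenate([shot_a, shot_b])
    print(detect_shot_boundaries(frames))  # expected: [5], the first frame of shot_b

Detecting boundaries in this way yields the shots; grouping consecutive shots into the semantically meaningful intervals discussed above is the harder problem that the book addresses.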
Information in the "Summary" section may refer to different editions of this title.
Message from the General Chair.
Message from the Program Co-Chairs.
Committees.
Part I: Advances in Visual Information Management I.
1. Construction of the Multimedia Mediation Systems; M. Sakauchi.
Part II: Video Retrieval.
2. A New Algebraic Approach to Retrieve Meaningful Video Intervals from Fragmentarily Indexed Video Shots; S. Pradhan, et al.
3. Toward the MEdiaSys VIdeo Search Engine (MEVISE); F. Andres, et al.
4. Content-based Video Retrieval Based on Similarity of Camera Motion; H. Endoh, R. Kataoka.
Part III: Information Visualization.
5. Visual Exploration for Social Recommendations; J. Tatemura.
6. Web-Based Visualization of Large Hierarchical Graphs Using Invisible Links in a Hyperbolic Space; M.C. Hao, et al.
7. Visualizing Electronic Document Repositories: Drawing Books and Papers in a Digital Library; A. Rauber, H. Bina.
Part IV: Modeling and Recognition.
8. A Motion Recognition Method by Using Primitive Motions; R. Osaki, et al.
9. Conceptual Modelling for Database User Interfaces; R. Cooper, et al.
Part V: Advances in Visual Information Management II.
10. Searching, Data Mining and Visualization of Multimedia Data; C. Faloutsos.
Part VI: Image Similarity Retrieval.
11. Efficient Image Retrieval by Examples; R. Brunelli, O. Mich.
12. Applying Augmented Orientation Spatial Similarity Retrieval in Pictorial Database; X.M. Zhou, et al.
13. Toward Feature Algebras in Visual Databases: The Case for a Histogram Algebra; A. Gupta, S. Santini.
Part VII: Spatio-Temporal Database.
14. Query-By-Trace: Visual Predicate Specification in Spatio-Temporal Databases; M. Erwig, M. Schneider.
15. Skimming Multiple Perspective Video Using Tempo-Spatial Importance Measures; T. Hata, et al.
16. Networked Augmented Spatial Hypermedia System on Internet; M. Murao, et al.
Part VIII: Visual Querying.
17. Drag and Drop: Amalgamation of Authoring, Querying, and Restructuring for Multimedia View Construction; A. Morishima, et al.
18. BBQ: A Visual Interface for Integrated Browsing and Querying of XML; K.D. Munroe, Y. Papakonstantiou.
19. MDDQL: A Visual Query Language for Metadata Driven Querying; E. Kapetanios, et al.
Part IX: Clustering and Retrieval.
20. Hierarchical Space Model for Multimedia Data Retrieval; M. Onizuka, S. Nishioka.
21. MST Construction with Metric Matrix for Clustering; M. Ishikawa, et al.
Part X: User Interface.
22. Automatic Updates of Interactive Information Visualization User Interfaces through Database Triggers; M. Leissler, et al.
23. TBE: A Graphical Interface for Writing Trigger Rules in Active Databases; D. Lee, et al.
24. WEBSA: Database Support for Efficient Web Site Navigation; I.F. Cruz, et al.
Index of contributors.
Keyword index.
Information in the "About this book" section may refer to different editions of this title.
From: Buchpark, Trebbin, Germany
Condition: Good | Language: English | Product type: Books. Item number 2760521/3
Quantity: 1 available
From: Lucky's Textbooks, Dallas, TX, U.S.A.
Condition: New. Item number ABLIING23Feb2416190184806
Quantity: More than 20 available
From: GreatBookPrices, Columbia, MD, U.S.A.
Condition: New. Item number 757276-n
Quantity: More than 20 available
From: moluna, Greven, Germany
Hardcover. Condition: New. Item number 5970464
Quantity: More than 20 available
From: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Item number 757276-n
Quantity: More than 20 available
From: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. Item number ria9780792378358_new
Quantity: More than 20 available
From: preigu, Osnabrück, Germany
Book. Condition: New. Advances in Visual Information Management | Visual Database Systems. IFIP TC2 WG2.6 Fifth Working Conference on Visual Database Systems, May 10-12, 2000, Fukuoka, Japan | Tiziana Catarci (et al.) | Book | xiv | English | 2000 | Springer US | EAN 9780792378358 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Seller: preigu. Print on Demand. Item number 102549492
Quantity: 5 available
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Book. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. 428 pp. English. Item number 9780792378358
Quantity: 2 available
From: Books Puddle, New York, NY, U.S.A.
Condition: New. 428 pp. Item number 26295249
Quantity: 1 available
From: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Book. Condition: New. This item is printed on demand (Print on Demand title). New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 428 pp. English. Item number 9780792378358
Quantity: 1 available