From: GreatBookPrices, Columbia, MD, U.S.A.
EUR 57,71
Quantity: More than 20 available
Condition: As New. Unread book in perfect condition.
From: ThriftBooks-Atlanta, AUSTELL, GA, U.S.A.
Paperback. Condition: Very Good. No jacket. May have limited writing in cover pages. Pages are unmarked. ~ ThriftBooks: Read More, Spend Less.
From: GreatBookPrices, Columbia, MD, U.S.A.
EUR 63,96
Quantity: More than 20 available
Condition: New.
Language: English
Publisher: Springer Nature Switzerland AG, CH, 2019
ISBN 10: 3030289532 ISBN 13: 9783030289539
From: Rarewaves.com USA, London, LONDO, United Kingdom
EUR 66,29
Quantity: More than 20 available
Paperback. Condition: New. 2019 ed. The development of "intelligent" systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for a broader adoption of AI technology is the inherent risks that come with giving up human control and oversight to "intelligent" machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI, focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI and AI techniques that have been proposed recently, reflecting the current discourse in this field and providing directions of future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.
From: Ria Christie Collections, Uxbridge, United Kingdom
EUR 53,87
Quantity: More than 20 available
Condition: New. In.
From: Chiron Media, Wallingford, United Kingdom
EUR 50,77
Quantity: 10 available
PF. Condition: New.
From: GreatBookPricesUK, Woodford Green, United Kingdom
EUR 53,72
Quantity: More than 20 available
Condition: New.
From: GreatBookPricesUK, Woodford Green, United Kingdom
EUR 59,21
Quantity: More than 20 available
Condition: As New. Unread book in perfect condition.
EUR 17,95
Quantity: 1 available
Paperback. Condition: New. Printed on request; new stock - printed after ordering - Technical Report from the year 1998 in the subject Mathematics - Statistics, grade: 1.0, Technical University of Denmark (Institute for Mathematical Modeling), language: English, abstract: Most human brain imaging experiments involve a number of subjects that is unusually low by accepted statistical standards. Although there are a number of practical reasons for using small samples in neuroimaging, we need to face the question of whether results obtained with only a few subjects will generalise to a larger population. In this contribution we address this issue using a Bayesian framework, derive confidence intervals for small-sample experiments, and discuss the issue of the prior.
From: Books Puddle, New York, NY, U.S.A.
Condition: New.
Language: English
Publisher: Springer Nature Switzerland AG, CH, 2019
ISBN 10: 3030289532 ISBN 13: 9783030289539
From: Rarewaves.com UK, London, United Kingdom
EUR 57,33
Quantity: More than 20 available
Paperback. Condition: New. 2019 ed.
From: Mispah books, Redhill, SURRE, United Kingdom
EUR 131,86
Quantity: 1 available
Paperback. Condition: New. Ships from multiple locations.
From: preigu, Osnabrück, Germany
EUR 104,15
Quantity: 5 available
Paperback. Condition: New. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning | Wojciech Samek (et al.) | Paperback | Lecture Notes in Computer Science | xi | English | 2019 | Springer | EAN 9783030289539 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Seller: preigu.
Language: English
Publisher: Springer-Verlag New York Inc, 2019
ISBN 10: 3030289532 ISBN 13: 9783030289539
From: Revaluation Books, Exeter, United Kingdom
EUR 166,56
Quantity: 2 available
Paperback. Condition: Brand New. 438 pages. 9.50 x 6.25 x 0.75 inches. In stock.
From: AHA-BUCH GmbH, Einbeck, Germany
EUR 117,69
Quantity: 1 available
Paperback. Condition: New. Printed on request; new stock - printed after ordering.
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 17,95
Quantity: 2 available
Paperback. Condition: New. This item is printed on demand - it takes 3-4 days longer. New stock. 28 pp. English.
From: Brook Bookstore On Demand, Napoli, NA, Italy
EUR 94,25
Quantity: More than 20 available
Condition: New. This is a print-on-demand item.
From: Majestic Books, Hounslow, United Kingdom
EUR 126,48
Quantity: 1 available
Condition: New. This item is printed on demand.
Language: English
Publisher: Springer, Aug 2019
ISBN 10: 3030289532 ISBN 13: 9783030289539
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 117,69
Quantity: 2 available
Paperback. Condition: New. This item is printed on demand - it takes 3-4 days longer. New stock. 452 pp. English.
From: Biblios, Frankfurt am Main, HESSE, Germany
EUR 164,50
Quantity: 4 available
Condition: New. Print on demand.
Language: English
Publisher: Springer, Aug 2019
ISBN 10: 3030289532 ISBN 13: 9783030289539
From: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 117,69
Quantity: 1 available
Paperback. Condition: New. This item is printed on demand - print-on-demand title. New stock. Springer-Verlag KG, Sachsenplatz 4-6, 1201 Wien. 452 pp. English.