EUR 48,28
Quantity: More than 20 available
Condition: New.

EUR 57,68
Quantity: More than 20 available
Condition: As New. Unread book in perfect condition.

EUR 58,02
Quantity: More than 20 available
Condition: New.

EUR 58,96
Quantity: More than 20 available
Condition: As New. Unread book in perfect condition.
Language: English
Publisher: Springer-Nature New York Inc, 2025
ISBN 10: 3032052491 ISBN 13: 9783032052490
From: Revaluation Books, Exeter, United Kingdom
EUR 67,42
Quantity: 2 available
Paperback. Condition: Brand New. 65 pages. 9.25x6.10x9.21 inches. In Stock.
Language: English
Publisher: Springer-Verlag Gmbh Nov 2025, 2025
ISBN 10: 3032052491 ISBN 13: 9783032052490
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 48,14
Quantity: 2 available
Paperback. Condition: New. This item is printed on demand - it takes 3-4 days longer - New stock.

This book explores the most common generative AI (GenAI) tools and techniques used by malicious actors for hacking and cyber-deception, along with the security risks of large language models (LLMs). It also covers how LLM deployment and use can be secured, and how generative AI can be utilized in SOC automation.

The rapid advancement and growing variety of publicly available generative AI tools enable cybersecurity use cases such as threat modeling, security awareness support, web application scanning, actionable insights, and alert fatigue prevention. However, they have also brought a steep rise in the number of offensive, rogue, and malicious generative AI applications. With large language models, social engineering tactics can reach new heights in the efficiency of phishing campaigns and in cyber-deception via synthetic media generation (misleading deepfake images and videos, face swapping, morphs, and voice clones). The result is a new era of cybersecurity that calls for innovative approaches to detect and mitigate sophisticated cyberattacks and to prevent hyper-realistic cyber-deception.

This work provides a starting point for researchers and students diving into malicious chatbot use, system administrators hardening the security of GenAI deployments, and organizations prone to sensitive data leaks through shadow AI. It also benefits SOC analysts considering generative AI for partially automating incident detection and response, and GenAI vendors working on security guardrails against malicious prompting.

71 pp. English.