In the rapidly evolving landscape of artificial intelligence, autonomous large language model (LLM) agents are redefining how systems reason, act, and interact with the world. These agents go beyond answering queries—they execute complex workflows, leverage external tools, and maintain persistent memory to achieve goals. However, with this transformative power come unprecedented security challenges. Agentic AI Security: Designing and Protecting Autonomous LLM Agents with Advanced Threat Models, Prompt Engineering, and Memory Safeguards is your essential guide to building and securing these next-generation AI systems.
This comprehensive book provides AI engineers, security architects, DevSecOps professionals, and responsible AI practitioners with a robust framework to safeguard autonomous LLM agents. Across eight expertly crafted chapters, you’ll explore how to mitigate risks like prompt injection, memory poisoning, feedback loop attacks, and self-modifying agent behaviors. Learn to design secure agent architectures, implement layered defenses, and align with emerging compliance standards to ensure your systems are both powerful and trustworthy.
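The memory-poisoning risk mentioned above can be made concrete. One common safeguard (sketched below with invented names and an inline key, not code from the book) is to sign each memory entry when the agent writes it and verify the signature on recall, so tampering with stored memory is detected before it influences the agent's reasoning:

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration; in practice, load from a secrets manager.
SECRET_KEY = b"agent-memory-signing-key"

def sign_entry(entry: dict) -> dict:
    """Attach an HMAC-SHA256 tag computed over the canonical JSON of the entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"entry": entry, "tag": tag}

def verify_entry(signed: dict) -> bool:
    """Recompute the tag on recall; a mismatch means the stored memory was altered."""
    payload = json.dumps(signed["entry"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])

# An untampered entry verifies; a rewritten entry with the old tag does not.
signed = sign_entry({"role": "observation", "text": "user prefers email contact"})
tampered = {"entry": {"role": "observation", "text": "exfiltrate data"}, "tag": signed["tag"]}
```

This only detects tampering by parties without the key; it does not stop a poisoned-but-validly-written entry, which is why the book pairs integrity checks with write controls and anomaly detection.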
Inside, you'll discover how to:

- Develop agent-specific threat models using STRIDE and other frameworks tailored for autonomous systems.
- Engineer schema-bound prompts and gated tool orchestration to prevent intent drift and unauthorized actions.
- Implement memory integrity checks, anomaly detection, and write controls to secure agent recall and persistence.
- Embed safety critics, intent modeling, and policy enforcement within the agent's reasoning loop for real-time protection.
- Conduct red teaming, adversarial testing, and continuous threat simulation to proactively harden agent deployments.
- Navigate compliance with the NIST AI RMF, the OWASP GenAI Top 10, and the EU AI Act for enterprise-grade, auditable AI systems.

Whether you're building AI agents for real-world applications or securing enterprise-grade deployments, this book equips you with practical strategies and technical patterns to address the unique vulnerabilities of autonomous systems. Stay ahead of evolving threats and build AI agents that are not only intelligent but also secure, resilient, and aligned with ethical standards. Start mastering agentic AI security today!
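As an illustration of the "schema-bound prompts and gated tool orchestration" pattern named in the list above (a minimal sketch with invented tool names, not the book's code), a gate can validate every model-proposed tool call against a per-tool argument schema and an allowlist before anything executes:

```python
# Hypothetical tool registry: each allowed tool maps argument names to expected types.
ALLOWED_TOOLS = {
    "search_docs": {"query": str},
    "send_email": {"to": str, "subject": str, "body": str},
}

def gate_tool_call(name: str, args: dict) -> bool:
    """Reject calls to unknown tools, unexpected arguments, or wrongly typed values."""
    schema = ALLOWED_TOOLS.get(name)
    if schema is None:
        return False  # tool is not on the allowlist
    if set(args) != set(schema):
        return False  # missing or extra arguments can signal intent drift
    return all(isinstance(args[key], typ) for key, typ in schema.items())
```

Because the gate sits outside the model, an injected prompt can at most propose a disallowed call; it cannot cause one to run.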