Language: English
Publisher: Springer-Nature New York Inc, 2025
ISBN 10: 3032034442 ISBN 13: 9783032034441
From: Revaluation Books, Exeter, United Kingdom
EUR 41,22
Quantity: 1 available
Hardcover. Condition: Brand New. 220 pages. 9.45x6.62x9.69 inches. In Stock.
Language: English
Publisher: Springer Nature Switzerland AG, Cham, 2025
ISBN 10: 3032034442 ISBN 13: 9783032034441
From: Grand Eagle Retail, Bensenville, IL, U.S.A.
Hardcover. Condition: New. This book presents recent breakthroughs in the field of Learning-from-Observation (LfO) resulting from advancements in large language models (LLMs) and reinforcement learning (RL), and positions the field in the context of historical developments in the area. LfO involves observing human behaviors and generating robot actions that mimic those behaviors. While LfO may appear similar, on the surface, to Imitation Learning (IL) in the machine learning community and Programming-by-Demonstration (PbD) in the robotics community, a significant difference is that those methods directly imitate human hand movements, whereas LfO encodes human behaviors into abstract representations and then maps these representations onto the currently available hardware (the individual body) of the robot, thus mimicking them indirectly. This indirect imitation absorbs changes in the surrounding environment and differences in robot hardware. Additionally, passing through this abstract representation allows filtering that distinguishes between important and less important aspects of human behavior, enabling imitation from fewer and less demanding demonstrations. The authors have been researching the LfO paradigm for the past decade or so. Previously, the focus was primarily on designing necessary and sufficient task representations to define specific task domains such as assembly of machine parts, knot-tying, and human dance movements. Recent advancements in Generative Pre-trained Transformers (GPT) and RL have led to groundbreaking developments in methods to obtain and map these abstract representations. By utilizing GPT, the authors can automatically generate abstract representations from videos, and by employing RL-trained agent libraries, implementing robot actions becomes more feasible. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
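The pipeline outlined in this description (a human demonstration is observed, a GPT-style model encodes it into an abstract, hardware-independent task representation, and that representation is then mapped onto the skills available on a particular robot body) can be illustrated with a minimal sketch. The Python below is purely hypothetical: the Task, encode_demonstration, and SkillLibrary names and the llm_describe callback are assumptions made for illustration, not the authors' implementation or any real API.

```python
# Minimal, hypothetical sketch of an LfO-style pipeline: observation -> abstract
# task representation -> mapping onto a specific robot body. Not the book's code.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Task:
    """One abstract, hardware-independent task step, e.g. Task("grasp", {"object": "bolt"})."""
    name: str
    params: Dict[str, str]


def encode_demonstration(video_frames: List[bytes],
                         llm_describe: Callable[[List[bytes]], List[Task]]) -> List[Task]:
    """Turn an observed human demonstration into an abstract task sequence.

    `llm_describe` stands in for a GPT-style model that verbalizes the video
    into symbolic task steps (an assumption of this sketch, not a real API).
    """
    return llm_describe(video_frames)


class SkillLibrary:
    """Maps abstract task steps onto skills implemented on one robot body.

    In the paradigm described above, each skill could be an RL-trained policy
    for that particular hardware; here a skill is just a Python callable.
    """

    def __init__(self, skills: Dict[str, Callable[[Dict[str, str]], None]]):
        self.skills = skills

    def execute(self, plan: List[Task]) -> None:
        for step in plan:
            skill = self.skills.get(step.name)
            if skill is None:
                # Filtering: steps this robot cannot realize are skipped rather
                # than copied literally from the human demonstration.
                print(f"skipping unsupported step: {step.name}")
                continue
            skill(step.params)


# Usage sketch with a stub "LLM" and two stub skills.
if __name__ == "__main__":
    stub_llm = lambda frames: [Task("grasp", {"object": "bolt"}),
                               Task("insert", {"into": "hole"})]
    plan = encode_demonstration([b"frame0", b"frame1"], stub_llm)
    library = SkillLibrary({
        "grasp": lambda p: print(f"grasping {p['object']}"),
        "insert": lambda p: print(f"inserting into {p['into']}"),
    })
    library.execute(plan)
```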
Condition: New.
Language: English
Publisher: Springer-Nature New York Inc, 2025
ISBN 10: 3032034442 ISBN 13: 9783032034441
From: Revaluation Books, Exeter, United Kingdom
EUR 66,11
Quantity: 1 available
Hardcover. Condition: Brand New. 220 pages. 9.45x6.62x9.69 inches. In Stock.
Language: English
Publisher: Springer Nature Switzerland AG, Cham, 2025
ISBN 10: 3032034442 ISBN 13: 9783032034441
From: CitiRetail, Stevenage, United Kingdom
EUR 44,75
Quantity: 1 available
Hardcover. Condition: New. Shipping may be from our UK warehouse or from our Australian or US warehouses, depending on stock availability.
From: AHA-BUCH GmbH, Einbeck, Germany
EUR 42,79
Quantity: 1 available
Book. Condition: New. Print-on-demand new stock; printed after ordering.
Language: English
Publisher: Springer Nature Switzerland AG, Cham, 2025
ISBN 10: 3032034442 ISBN 13: 9783032034441
From: AussieBookSeller, Truganina, VIC, Australia
EUR 79,05
Quantity: 1 available
Hardcover. Condition: New. Shipping may be from our Sydney, NSW warehouse or from our UK or US warehouse, depending on stock availability.
Language: English
Publisher: Springer-Verlag GmbH Nov 2025, 2025
ISBN 10: 3032034442 ISBN 13: 9783032034441
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 42,79
Quantity: 2 available
Book. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. 204 pp. English.
From: Majestic Books, Hounslow, United Kingdom
EUR 68,86
Quantity: 4 available
Condition: New. Print on Demand.
From: Biblios, Frankfurt am Main, Hesse, Germany
EUR 70,10
Quantity: 4 available
Condition: New. Print on Demand.
Language: English
Publisher: Springer, Springer Nov 2025, 2025
ISBN 10: 3032034442 ISBN 13: 9783032034441
From: buchversandmimpf2000, Emtmannsberg, Bavaria, Germany
EUR 42,79
Quantity: 1 available
Book. Condition: New. This item is printed on demand (print-on-demand title). New stock. Springer-Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 220 pp. English.
From: preigu, Osnabrück, Germany
EUR 40,20
Quantity: 5 available
Book. Condition: New. Learning-from-Observation 2.0 | Automatic Acquisition of Robot Behavior from Human Demonstration | Katsushi Ikeuchi (et al.) | Book | xvi | English | 2025 | Springer | EAN 9783032034441 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Seller: preigu. Print on Demand.