Paperback. Condition: Very Good.
Language: English
Publisher: Springer Verlag, Singapore, Singapore, 2025
ISBN 10: 981978879X ISBN 13: 9789819788798
From: Grand Eagle Retail, Bensenville, IL, U.S.A.
Paperback. Condition: New. This book introduces the gamma-divergence, a measure of distance between probability distributions proposed by Fujisawa and Eguchi in 2008. The gamma-divergence has been extensively explored to provide robust estimation when the power index γ is positive. It can also be defined when the power index γ is negative, as long as the integrability condition is satisfied. The authors therefore consider the gamma-divergence defined on a set of discrete distributions. The arithmetic, geometric, and harmonic means of the distribution ratios are closely connected with the gamma-divergence with a negative γ; in particular, the authors call the gamma-divergence with γ equal to -1 the geometric-mean (GM) divergence. The book begins with an overview of the gamma-divergence and its properties, then discusses its applications in various areas, including machine learning, statistics, and ecology. Bernoulli, categorical, Poisson, negative binomial, and Boltzmann distributions are discussed as typical examples. Furthermore, regression models that explicitly or implicitly assume these distributions for the dependent variable in generalized linear models are discussed as applications of the minimum gamma-divergence method. In ensemble learning, AdaBoost is derived from the exponential loss function in a weighted-majority-vote manner, and it is pointed out that the exponential loss function is deeply connected to the GM divergence. In the Boltzmann machine, maximum likelihood estimation has to rely on approximation methods such as mean-field approximation because computation of the partition function is intractable. However, by considering the GM divergence and the exponential loss, it is shown that calculation of the partition function is unnecessary and estimation can be carried out without variational inference. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
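For reference, the gamma-divergence described in this listing is usually written as follows. This is a sketch of the definition given by Fujisawa and Eguchi (2008) for densities g and f, reproduced from the general literature rather than quoted from this book:

\[
  D_\gamma(g, f) = \frac{1}{\gamma(1+\gamma)} \log \int g(x)^{1+\gamma}\, dx
  - \frac{1}{\gamma} \log \int g(x)\, f(x)^{\gamma}\, dx
  + \frac{1}{1+\gamma} \log \int f(x)^{1+\gamma}\, dx .
\]

As γ → 0 this recovers the Kullback-Leibler divergence, while γ = -1 is singular in this parameterization, which is why the GM divergence discussed in the book requires its own construction on discrete distributions.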
From: California Books, Miami, FL, U.S.A.
EUR 54,05
Quantity: More than 20 available
Condition: New.
From: Ria Christie Collections, Uxbridge, United Kingdom
EUR 51,33
Quantity: More than 20 available
Condition: New.
Condition: New.
From: Revaluation Books, Exeter, United Kingdom
EUR 73,70
Quantity: 2 available
Paperback. Condition: Brand New. 1st edition. 59 pages. 9.25 x 6.25 x 0.50 inches. In stock.
Language: English
Publisher: Springer Verlag, Singapore, Singapore, 2025
ISBN 10: 981978879X ISBN 13: 9789819788798
From: CitiRetail, Stevenage, United Kingdom
EUR 50,14
Quantity: 1 available
Paperback. Condition: New. This book introduces the gamma-divergence, a measure of distance between probability distributions proposed by Fujisawa and Eguchi in 2008. The gamma-divergence has been extensively explored to provide robust estimation when the power index γ is positive. It can also be defined when the power index γ is negative, as long as the integrability condition is satisfied. The authors therefore consider the gamma-divergence defined on a set of discrete distributions. The arithmetic, geometric, and harmonic means of the distribution ratios are closely connected with the gamma-divergence with a negative γ; in particular, the authors call the gamma-divergence with γ equal to -1 the geometric-mean (GM) divergence. The book begins with an overview of the gamma-divergence and its properties, then discusses its applications in various areas, including machine learning, statistics, and ecology. Bernoulli, categorical, Poisson, negative binomial, and Boltzmann distributions are discussed as typical examples. Furthermore, regression models that explicitly or implicitly assume these distributions for the dependent variable in generalized linear models are discussed as applications of the minimum gamma-divergence method. In ensemble learning, AdaBoost is derived from the exponential loss function in a weighted-majority-vote manner, and it is pointed out that the exponential loss function is deeply connected to the GM divergence. In the Boltzmann machine, maximum likelihood estimation has to rely on approximation methods such as mean-field approximation because computation of the partition function is intractable. However, by considering the GM divergence and the exponential loss, it is shown that calculation of the partition function is unnecessary and estimation can be carried out without variational inference. Shipping may be from our UK warehouse or from our Australian or US warehouses, depending on stock availability.
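As an informal companion to the description above, here is a minimal numerical sketch of the gamma-divergence restricted to discrete distributions. The function name and the handling of the singular indices are our own illustration, not code from the book:

import numpy as np

def gamma_divergence(p, q, gamma):
    """Discrete gamma-divergence D_gamma(p, q) for probability vectors.

    Follows the Fujisawa-Eguchi (2008) parameterization. gamma = 0 and
    gamma = -1 are singular here: gamma -> 0 recovers the KL divergence,
    and gamma = -1 corresponds to the GM divergence discussed in the book,
    which needs its own limiting construction. For negative gamma, all
    entries of q must be positive (the discrete integrability condition).
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    if gamma in (0.0, -1.0):
        raise ValueError("gamma = 0 and gamma = -1 need limiting forms")
    term_p = np.log(np.sum(p ** (1.0 + gamma))) / (gamma * (1.0 + gamma))
    cross = np.log(np.sum(p * q ** gamma)) / gamma
    term_q = np.log(np.sum(q ** (1.0 + gamma))) / (1.0 + gamma)
    return term_p - cross + term_q

# Sanity check: for gamma near 0 the value approaches KL(p, q).
p = np.array([0.2, 0.3, 0.5])
q = np.array([0.25, 0.25, 0.5])
print(gamma_divergence(p, q, 1e-4), np.sum(p * np.log(p / q)))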
Language: English
Publisher: Springer Nature Singapore, March 2025
ISBN 10: 981978879X ISBN 13: 9789819788798
From: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 48,14
Quantity: 2 available
Paperback. Condition: New. New stock - This book introduces the gamma-divergence, a measure of distance between probability distributions proposed by Fujisawa and Eguchi in 2008. The gamma-divergence has been extensively explored to provide robust estimation when the power index γ is positive. It can also be defined when the power index γ is negative, as long as the integrability condition is satisfied. The authors therefore consider the gamma-divergence defined on a set of discrete distributions. The arithmetic, geometric, and harmonic means of the distribution ratios are closely connected with the gamma-divergence with a negative γ; in particular, the authors call the gamma-divergence with γ equal to -1 the geometric-mean (GM) divergence. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 128 pp. English.
Language: English
Publisher: Springer Nature Singapore, 2025
ISBN 10: 981978879X ISBN 13: 9789819788798
From: AHA-BUCH GmbH, Einbeck, Germany
EUR 52,95
Quantity: 1 available
Paperback. Condition: New. Print on demand - new stock, printed after ordering. This book introduces the gamma-divergence, a measure of distance between probability distributions proposed by Fujisawa and Eguchi in 2008. The gamma-divergence has been extensively explored to provide robust estimation when the power index γ is positive. It can also be defined when the power index γ is negative, as long as the integrability condition is satisfied. The authors therefore consider the gamma-divergence defined on a set of discrete distributions. The arithmetic, geometric, and harmonic means of the distribution ratios are closely connected with the gamma-divergence with a negative γ; in particular, the authors call the gamma-divergence with γ equal to -1 the geometric-mean (GM) divergence. The book begins with an overview of the gamma-divergence and its properties, then discusses its applications in various areas, including machine learning, statistics, and ecology. Bernoulli, categorical, Poisson, negative binomial, and Boltzmann distributions are discussed as typical examples. Furthermore, regression models that explicitly or implicitly assume these distributions for the dependent variable in generalized linear models are discussed as applications of the minimum gamma-divergence method. In ensemble learning, AdaBoost is derived from the exponential loss function in a weighted-majority-vote manner, and it is pointed out that the exponential loss function is deeply connected to the GM divergence. In the Boltzmann machine, maximum likelihood estimation has to rely on approximation methods such as mean-field approximation because computation of the partition function is intractable. However, by considering the GM divergence and the exponential loss, it is shown that calculation of the partition function is unnecessary and estimation can be carried out without variational inference.
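To make the AdaBoost remark in this listing concrete, here is a generic sketch of AdaBoost driven by the exponential loss, using decision stumps as weak learners over labels in {-1, +1}. This is the textbook algorithm, not the book's GM-divergence derivation, and all names are our own:

import numpy as np

def fit_stump(X, y, w):
    # Weighted decision stump: threshold one feature, predict +/-1.
    best = (np.inf, 0, 0.0, 1)  # (weighted error, feature, threshold, sign)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] <= t, s, -s)
                err = np.sum(w * (pred != y))
                if err < best[0]:
                    best = (err, j, t, s)
    return best

def adaboost(X, y, n_rounds=20):
    # Each round greedily reduces the exponential loss sum_i exp(-y_i F(x_i)).
    w = np.full(len(y), 1.0 / len(y))  # example weights
    ensemble = []
    for _ in range(n_rounds):
        err, j, t, s = fit_stump(X, y, w)
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)  # weak learner's vote weight
        pred = np.where(X[:, j] <= t, s, -s)
        w *= np.exp(-alpha * y * pred)  # exponential-loss reweighting
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    # Weighted majority vote of the stumps.
    F = sum(a * np.where(X[:, j] <= t, s, -s) for a, j, t, s in ensemble)
    return np.sign(F)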
Language: English
Publisher: Springer Japan, 2019
ISBN 10: 4431555692 ISBN 13: 9784431555698
From: AHA-BUCH GmbH, Einbeck, Germany
EUR 57,68
Quantity: 1 available
Paperback. Condition: New. Print on demand - new stock, printed after ordering. This book presents a fresh approach in that it provides a comprehensive, recent review of challenging problems caused by imbalanced data in prediction and classification, and introduces several of the latest statistical methods for dealing with these problems. The book discusses the imbalance of data from two points of view. The first is quantitative imbalance, meaning that the sample size of one population greatly outnumbers that of another. It includes presence-only data as an extreme case, where the presence of a species is confirmed whereas the information on its absence is uncertain, a situation especially common in ecology when predicting habitat distribution. The second is qualitative imbalance, meaning that the data distribution of one population can be well specified whereas that of the other is highly heterogeneous. A typical case is the existence of outliers, commonly observed in gene expression data; another is the heterogeneous characteristics often observed in the case group of case-control studies. The extension of the logistic regression model, maxent, and AdaBoost to imbalanced data is discussed, providing a new framework for improving prediction, classification, and variable selection. Weight functions introduced in these methods play an important role in alleviating the imbalance of data. The book also furnishes a new perspective on these problems and shows applications of the recently developed statistical methods to real data sets.
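As one concrete illustration of the weighting idea mentioned above, a logistic regression can be fit with per-class weights that compensate for quantitative imbalance. This is a standard reweighting sketch, not the specific estimators developed in this book:

import numpy as np

def weighted_logistic_regression(X, y, n_iter=500, lr=0.1):
    # Gradient ascent on a class-weighted log-likelihood, y in {0, 1}.
    # Each class is weighted inversely to its frequency, so the minority
    # class contributes as much to the fit as the majority class.
    n, d = X.shape
    w_pos = n / (2.0 * max(y.sum(), 1))        # weight for y = 1
    w_neg = n / (2.0 * max((1 - y).sum(), 1))  # weight for y = 0
    sw = np.where(y == 1, w_pos, w_neg)        # per-example weights
    beta = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))    # predicted probabilities
        beta += lr * X.T @ (sw * (y - p)) / n  # weighted score step
    return beta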
Language: English
Publisher: Springer Verlag, Singapore, Singapore, 2025
ISBN 10: 981978879X ISBN 13: 9789819788798
From: AussieBookSeller, Truganina, VIC, Australia
EUR 89,24
Quantity: 1 available
Paperback. Condition: New. This book introduces the gamma-divergence, a measure of distance between probability distributions proposed by Fujisawa and Eguchi in 2008. The gamma-divergence has been extensively explored to provide robust estimation when the power index γ is positive. It can also be defined when the power index γ is negative, as long as the integrability condition is satisfied. The authors therefore consider the gamma-divergence defined on a set of discrete distributions. The arithmetic, geometric, and harmonic means of the distribution ratios are closely connected with the gamma-divergence with a negative γ; in particular, the authors call the gamma-divergence with γ equal to -1 the geometric-mean (GM) divergence. The book begins with an overview of the gamma-divergence and its properties, then discusses its applications in various areas, including machine learning, statistics, and ecology. Bernoulli, categorical, Poisson, negative binomial, and Boltzmann distributions are discussed as typical examples. Furthermore, regression models that explicitly or implicitly assume these distributions for the dependent variable in generalized linear models are discussed as applications of the minimum gamma-divergence method. In ensemble learning, AdaBoost is derived from the exponential loss function in a weighted-majority-vote manner, and it is pointed out that the exponential loss function is deeply connected to the GM divergence. In the Boltzmann machine, maximum likelihood estimation has to rely on approximation methods such as mean-field approximation because computation of the partition function is intractable. However, by considering the GM divergence and the exponential loss, it is shown that calculation of the partition function is unnecessary and estimation can be carried out without variational inference. Shipping may be from our Sydney, NSW warehouse or from our UK or US warehouse, depending on stock availability.
From: Mispah books, Redhill, Surrey, United Kingdom
EUR 109,85
Quantity: 1 available
Paperback. Condition: New.
From: Lucky's Textbooks, Dallas, TX, U.S.A.
EUR 146,59
Quantity: More than 20 available
Condition: New.
From: Ria Christie Collections, Uxbridge, United Kingdom
EUR 136,48
Quantity: More than 20 available
Condition: New.
From: California Books, Miami, FL, U.S.A.
EUR 165,63
Quantity: More than 20 available
Condition: New.
From: Buchpark, Trebbin, Germany
EUR 76,26
Quantity: 2 available
Condition: Excellent | Language: English | Product type: Books | This book explores minimum divergence methods of statistical machine learning for estimation, regression, prediction, and so forth, engaging information geometry to elucidate the intrinsic properties of the corresponding loss functions, learning algorithms, and statistical models. One of the most elementary examples is Gauss's least squares estimator in a linear regression model, in which the estimator is given by minimizing the sum of squares between a response vector and a vector in the linear subspace spanned by the explanatory vectors. This is extended to Fisher's maximum likelihood estimator (MLE) for an exponential model, in which the estimator is provided by minimizing the Kullback-Leibler (KL) divergence between a data distribution and a parametric distribution of the exponential model in an empirical analogue. Thus, we envisage a geometric interpretation of such minimization procedures, in which a right triangle satisfies a Pythagorean identity in the sense of the KL divergence. This understanding sublimates a dualistic interplay between a statistical estimation and model, which requires dual geodesic paths, called m-geodesic and e-geodesic paths, in the framework of information geometry. We extend this dualistic structure of the MLE and exponential model to that of the minimum divergence estimator and the maximum entropy model, which is applied to robust statistics, maximum entropy, density estimation, principal component analysis, independent component analysis, regression analysis, manifold learning, boosting algorithms, clustering, dynamic treatment regimes, and so forth.
We consider a variety of information divergence measures, typically including the KL divergence, to express the departure of one probability distribution from another. An information divergence decomposes into the cross-entropy and the (diagonal) entropy: the entropy is associated with a generative model as a family of maximum entropy distributions, while the cross-entropy is associated with a statistical estimation method via minimization of its empirical analogue based on given data. Thus any statistical divergence encodes an intrinsic pairing of a generative model and an estimation method. Typically, the KL divergence leads to the exponential model and maximum likelihood estimation. It is shown that any information divergence leads to a Riemannian metric and a pair of linear connections in the framework of information geometry.
We focus on a class of information divergences generated by an increasing and convex function U, called the U-divergence. It is shown that any generator function U generates the U-entropy and U-divergence, and that there is a dualistic structure between the U-divergence method and the maximum U-entropy model. We observe that a specific choice of U leads to a robust statistical procedure via the minimum U-divergence method. If U is selected as an exponential function, then the corresponding U-entropy and U-divergence reduce to the Boltzmann-Shannon entropy and the KL divergence, and the minimum U-divergence estimator is equivalent to the MLE. For robust supervised learning to predict a class label, we observe that the U-boosting algorithm performs well under contamination by mislabeled examples if U is appropriately selected.
We present such maximum U-entropy and minimum U-divergence methods, in particular selecting a power function as U to provide flexible performance in statistical machine learning.
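For readers skimming this description, the U-divergence it refers to is typically written as below, with ξ = (U')^{-1}. This is a sketch following the standard form in Eguchi's work, not a display taken from the book:

\[
  D_U(f, g) = \int \Big\{ U\big(\xi(g(x))\big) - U\big(\xi(f(x))\big)
  - f(x)\,\big(\xi(g(x)) - \xi(f(x))\big) \Big\}\, dx .
\]

With U(t) = e^t and ξ = log, this reduces to the (extended) Kullback-Leibler divergence ∫ f log(f/g) dx + ∫ (g - f) dx, matching the statement above that the exponential choice of U recovers the KL divergence and maximum likelihood.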
Condition: New. 1st ed. 2022 edition.
Language: English
Publisher: Springer Nature Singapore, 2025
ISBN 10: 981978879X ISBN 13: 9789819788798
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 48,14
Quantity: 2 available
Paperback. Condition: New. This item is printed on demand - it takes 3-4 days longer - new stock. This book introduces the gamma-divergence, a measure of distance between probability distributions proposed by Fujisawa and Eguchi in 2008. The gamma-divergence has been extensively explored to provide robust estimation when the power index γ is positive. It can also be defined when the power index γ is negative, as long as the integrability condition is satisfied. The authors therefore consider the gamma-divergence defined on a set of discrete distributions. The arithmetic, geometric, and harmonic means of the distribution ratios are closely connected with the gamma-divergence with a negative γ; in particular, the authors call the gamma-divergence with γ equal to -1 the geometric-mean (GM) divergence. The book begins with an overview of the gamma-divergence and its properties, then discusses its applications in various areas, including machine learning, statistics, and ecology. Bernoulli, categorical, Poisson, negative binomial, and Boltzmann distributions are discussed as typical examples. Furthermore, regression models that explicitly or implicitly assume these distributions for the dependent variable in generalized linear models are discussed as applications of the minimum gamma-divergence method. In ensemble learning, AdaBoost is derived from the exponential loss function in a weighted-majority-vote manner, and it is pointed out that the exponential loss function is deeply connected to the GM divergence. In the Boltzmann machine, maximum likelihood estimation has to rely on approximation methods such as mean-field approximation because computation of the partition function is intractable. However, by considering the GM divergence and the exponential loss, it is shown that calculation of the partition function is unnecessary and estimation can be carried out without variational inference. 118 pp. English.
From: Majestic Books, Hounslow, United Kingdom
EUR 66,01
Quantity: 4 available
Condition: New. Print on Demand.
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 53,49
Quantity: 2 available
Paperback. Condition: New. This item is printed on demand - it takes 3-4 days longer - new stock. This book presents a fresh approach in that it provides a comprehensive, recent review of challenging problems caused by imbalanced data in prediction and classification, and introduces several of the latest statistical methods for dealing with these problems. The book discusses the imbalance of data from two points of view. The first is quantitative imbalance, meaning that the sample size of one population greatly outnumbers that of another. It includes presence-only data as an extreme case, where the presence of a species is confirmed whereas the information on its absence is uncertain, a situation especially common in ecology when predicting habitat distribution. The second is qualitative imbalance, meaning that the data distribution of one population can be well specified whereas that of the other is highly heterogeneous. A typical case is the existence of outliers, commonly observed in gene expression data; another is the heterogeneous characteristics often observed in the case group of case-control studies. The extension of the logistic regression model, maxent, and AdaBoost to imbalanced data is discussed, providing a new framework for improving prediction, classification, and variable selection. Weight functions introduced in these methods play an important role in alleviating the imbalance of data. The book also furnishes a new perspective on these problems and shows applications of the recently developed statistical methods to real data sets. 68 pp. English.
From: Biblios, Frankfurt am Main, Hesse, Germany
EUR 68,46
Quantity: 4 available
Condition: New. Print on Demand.
From: moluna, Greven, Germany
EUR 44,31
Quantity: More than 20 available
Condition: New. This is a print-on-demand item and will be printed for you after your order.
From: moluna, Greven, Germany
EUR 48,37
Quantity: More than 20 available
Condition: New. This is a print-on-demand item and will be printed for you after your order. Osamu Komori, The Institute of Statistical Mathematics; Shinto Eguchi, The Institute of Statistical Mathematics. This book presents a fresh approach in that it provides a comprehensive, recent review of challenging problems caused by imbalanced data ...
Language: English
Publisher: Springer Japan, July 2019
ISBN 10: 4431555692 ISBN 13: 9784431555698
From: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 53,49
Quantity: 1 available
Paperback. Condition: New. This item is printed on demand - Print on Demand title. New stock - This book presents a fresh approach in that it provides a comprehensive, recent review of challenging problems caused by imbalanced data in prediction and classification, and introduces several of the latest statistical methods for dealing with these problems. The book discusses the imbalance of data from two points of view. The first is quantitative imbalance, meaning that the sample size of one population greatly outnumbers that of another. It includes presence-only data as an extreme case, where the presence of a species is confirmed whereas the information on its absence is uncertain, a situation especially common in ecology when predicting habitat distribution. The second is qualitative imbalance, meaning that the data distribution of one population can be well specified whereas that of the other is highly heterogeneous. A typical case is the existence of outliers, commonly observed in gene expression data; another is the heterogeneous characteristics often observed in the case group of case-control studies. The extension of the logistic regression model, maxent, and AdaBoost to imbalanced data is discussed, providing a new framework for improving prediction, classification, and variable selection. Weight functions introduced in these methods play an important role in alleviating the imbalance of data. The book also furnishes a new perspective on these problems and shows applications of the recently developed statistical methods to real data sets. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 68 pp. English.
From: preigu, Osnabrück, Germany
EUR 45,70
Quantity: 5 available
Paperback. Condition: New. Minimum Gamma-Divergence for Regression and Classification Problems | Shinto Eguchi | Paperback | viii | English | 2025 | Springer | EAN 9789819788798 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Seller: preigu | Print on Demand.
From: moluna, Greven, Germany
EUR 127,40
Quantity: More than 20 available
Condition: New. This is a print-on-demand item and will be printed for you after your order. Provides various applications including boosting and kernel methods in machine learning with a geometric-invariance viewpoint. Facilitates a deeper understanding of the complementary relation between statistical models and estimation in the context ...
From: Majestic Books, Hounslow, United Kingdom
EUR 198,38
Quantity: 4 available
Condition: New. Print on Demand.
From: preigu, Osnabrück, Germany
EUR 132,10
Quantity: 5 available
Book. Condition: New. Minimum Divergence Methods in Statistical Machine Learning | From an Information Geometric Viewpoint | Shinto Eguchi et al. | Book | x | English | 2022 | Springer | EAN 9784431569206 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Seller: preigu | Print on Demand.
From: Biblios, Frankfurt am Main, Hesse, Germany
EUR 204,43
Quantity: 4 available
Condition: New. Print on Demand.