The statistical analysis of discrete multivariate data has received a great deal of attention in the statistics literature over the past two decades. The development of appropriate models is the common theme of books such as Cox (1970), Haberman (1974, 1978, 1979), Bishop et al. (1975), Gokhale and Kullback (1978), Upton (1978), Fienberg (1980), Plackett (1981), Agresti (1984), Goodman (1984), and Freeman (1987). The objective of our book differs from those listed above. Rather than concentrating on model building, our intention is to describe and assess the goodness-of-fit statistics used in the model verification part of the inference process. Those books that emphasize model development tend to assume that the model can be tested with one of the traditional goodness-of-fit tests (e.g., Pearson's X² or the loglikelihood ratio G²) using a chi-squared critical value. However, it is well known that this can give a poor approximation in many circumstances. This book provides the reader with a unified analysis of the traditional goodness-of-fit tests, describing their behavior and relative merits as well as introducing some new test statistics. The power-divergence family of statistics (Cressie and Read, 1984) is used to link the traditional test statistics through a single real-valued parameter, and provides a way to consolidate and extend the current fragmented literature. As a by-product of our analysis, a new statistic emerges "between" Pearson's X² and the loglikelihood ratio G² that has some valuable properties.
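For observed counts O_i and expected counts E_i, the power-divergence statistic indexed by the real parameter λ is 2nI^λ = (2/(λ(λ+1))) Σ_i O_i[(O_i/E_i)^λ − 1]; λ = 1 recovers Pearson's X², the limit λ → 0 recovers G², and λ = 2/3 is the "between" statistic highlighted in the book. The following minimal Python sketch illustrates the family (the counts are invented for illustration; the same family is also exposed by SciPy's power_divergence, whose "cressie-read" option corresponds to λ = 2/3):

```python
import numpy as np
from scipy.stats import power_divergence, chi2

def power_divergence_stat(observed, expected, lam):
    """Power-divergence statistic 2nI^lambda (Cressie and Read, 1984).

    lam = 1 gives Pearson's X^2; the limit lam -> 0 gives G^2;
    lam = 2/3 is the statistic 'between' X^2 and G^2.
    """
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    if np.isclose(lam, 0.0):       # limiting case: loglikelihood ratio G^2
        return 2.0 * np.sum(observed * np.log(observed / expected))
    if np.isclose(lam, -1.0):      # limiting case: modified loglikelihood ratio
        return 2.0 * np.sum(expected * np.log(expected / observed))
    return (2.0 / (lam * (lam + 1.0))) * np.sum(
        observed * ((observed / expected) ** lam - 1.0)
    )

# Hypothetical counts for a 4-cell multinomial under an equiprobable null model
observed = np.array([30, 21, 26, 23])
expected = np.full(4, observed.sum() / 4)

for lam in (1.0, 0.0, 2 / 3):
    stat = power_divergence_stat(observed, expected, lam)
    pval = chi2.sf(stat, df=3)     # chi-squared reference with 4 - 1 = 3 df
    print(f"lambda = {lam:.3f}: statistic = {stat:.3f}, p = {pval:.3f}")

# Cross-check against SciPy's implementation of the same family
print(power_divergence(observed, expected, lambda_=2 / 3))
```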
1 Introduction to the Power-Divergence Statistic
  1.1 A Unified Approach to Model Testing
  1.2 The Power-Divergence Statistic
  1.3 Outline of the Chapters
2 Defining and Testing Models: Concepts and Examples
  2.1 Modeling Discrete Multivariate Data
  2.2 Testing the Fit of a Model
  2.3 An Example: Time Passage and Memory Recall
  2.4 Applying the Power-Divergence Statistic
  2.5 Power-Divergence Measures in Visual Perception
3 Modeling Cross-Classified Categorical Data
  3.1 Association Models and Contingency Tables
  3.2 Two-Dimensional Tables: Independence and Homogeneity
  3.3 Loglinear Models for Two and Three Dimensions
  3.4 Parameter Estimation Methods: Minimum Distance Estimation
  3.5 Model Generation: A Characterization of the Loglinear, Linear, and Other Models through Minimum Distance Estimation
  3.6 Model Selection and Testing Strategy for Loglinear Models
4 Testing the Models: Large-Sample Results
  4.1 Significance Levels under the Classical (Fixed-Cells) Assumptions
  4.2 Efficiency under the Classical (Fixed-Cells) Assumptions
  4.3 Significance Levels and Efficiency under Sparseness Assumptions
  4.4 A Summary Comparison of the Power-Divergence Family Members
  4.5 Which Test Statistic?
5 Improving the Accuracy of Tests with Small Sample Size
  5.1 Improved Accuracy through More Accurate Moments
  5.2 A Second-Order Correction Term Applied Directly to the Asymptotic Distribution
  5.3 Four Approximations to the Exact Significance Level: How Do They Compare?
  5.4 Exact Power Comparisons
  5.5 Which Test Statistic?
6 Comparing the Sensitivity of the Test Statistics
  6.1 Relative Deviations between Observed and Expected Cell Frequencies
  6.2 Minimum Magnitude of the Power-Divergence Test Statistic
  6.3 Further Insights into the Accuracy of Large-Sample Approximations
  6.4 Three Illustrations
  6.5 Transforming for Closer Asymptotic Approximations in Contingency Tables with Some Small Expected Cell Frequencies
  6.6 A Geometric Interpretation of the Power-Divergence Statistic
  6.7 Which Test Statistic?
7 Links with Other Test Statistics and Measures of Divergence
  7.1 Test Statistics Based on Quantiles and Spacings
  7.2 A Continuous Analogue to the Discrete Test Statistic
  7.3 Comparisons of Discrete and Continuous Test Statistics
  7.4 Diversity and Divergence Measures from Information Theory
8 Future Directions
  8.1 Hypothesis Testing and Parameter Estimation under Sparseness Assumptions
  8.2 The Parameter λ as a Transformation
  8.3 A Generalization of Akaike's Information Criterion
  8.4 The Power-Divergence Statistic as a Measure of Loss and a Criterion for General Parameter Estimation
  8.5 Generalizing the Multinomial Distribution
Historical Perspective: Pearson's X² and the Loglikelihood Ratio Statistic G²
  1. Small-Sample Comparisons of X² and G² under the Classical (Fixed-Cells) Assumptions
  2. Comparing X² and G² under Sparseness Assumptions
  3. Efficiency Comparisons
  4. Modified Assumptions and Their Impact
Appendix: Proofs of Important Results
  A1. Some Results on Rao Second-Order Efficiency and Hodges-Lehmann Deficiency (Section 3.4)
  A2. Characterization of the Generalized Minimum Power-Divergence Estimate (Section 3.5)
  A3. Characterization of the Lancaster-Additive Model (Section 3.5)
  A4. Proof of Results (i), (ii), and (iii) (Section 4.1)
  A5. Statement of Birch's Regularity Conditions and Proof that the Minimum Power-Divergence Estimator Is BAN (Section 4.1)
  A6. Proof of Results (i*), (ii*), and (iii*) (Section 4.1)
  A7. The Power-Divergence Generalization of the Chernoff-Lehmann Statistic: An Outline (Section 4.1)
  A8. Derivation of the Asymptotic Noncentral Chi-Squared Distribution for the Power-Divergence Statistic under Local Alternative Models (Section 4.2)
  A9. Derivation of the Mean and Variance of the Power-Divergence Statistic for λ > -1 under a Nonlocal Alternative Model (Section 4.2)
  A10. Proof of the Asymptotic Normality of the Power-Divergence Statistic under Sparseness Assumptions (Section 4.3)
  A12. Derivation of the Second-Order Terms for the Distribution Function of the Power-Divergence Statistic under the Classical (Fixed-Cells) Assumptions (Section 5.2)
  A13. Derivation of the Minimum Asymptotic Value of the Power-Divergence Statistic (Section 6.2)
  A14. Limiting Form of the Power-Divergence Statistic as the Parameter λ → ±∞ (Section 6.2)
Author Index
Book by Timothy R. C. Read and Noel A. C. Cressie