
Models for Probability and Statistical Inference: Theory and Applications - Hardcover

 
9780470073728: Models for Probability and Statistical Inference: Theory and Applications

Synopsis

This concise, yet thorough, book is enhanced with simulations and graphs to build the intuition of readers.

Models for Probability and Statistical Inference was written over a five-year period and serves as a comprehensive treatment of the fundamentals of probability and statistical inference. With detailed theoretical coverage found throughout the book, readers acquire the fundamentals needed to advance to more specialized topics, such as sampling, linear models, design of experiments, statistical computing, survival analysis, and bootstrapping.

Ideal as a textbook for a two-semester sequence on probability and statistical inference, the book opens with chapters on probability that include discussions of: discrete models and random variables; discrete distributions including the binomial, hypergeometric, geometric, and Poisson; continuous, normal, gamma, and conditional distributions; and limit theory. Since limit theory is usually the most difficult topic for readers to master, the author thoroughly discusses modes of convergence of sequences of random variables, with special attention to convergence in distribution. The second half of the book addresses statistical inference, beginning with a discussion of point estimation, followed by coverage of consistency and confidence intervals. Further areas of exploration include: distributions defined in terms of the multivariate normal, chi-square, t, and F (central and non-central); the one- and two-sample Wilcoxon tests, together with methods of estimation based on both; linear models with a linear space-projection approach; and logistic regression.

Each section contains a set of problems ranging in difficulty from simple to more complex, and selected answers as well as proofs of almost all statements are provided. An abundance of figures, along with helpful simulations and graphs produced by the statistical package S-Plus®, is included to help build the intuition of readers.

Information in the "Synopsis" section may refer to different editions of this title.

About the Author

James H. Stapleton, PhD, has recently retired after forty-nine years as professor in the Department of Statistics and Probability at Michigan State University, including eight years as chairperson and almost twenty years as graduate director. Dr. Stapleton is the author of Linear Statistical Models (Wiley), and he received his PhD in mathematical statistics from Purdue University.


Excerpt. © Reprinted by permission. All rights reserved.

Models for Probability and Statistical Inference

Theory and Applications
By James H. Stapleton

John Wiley & Sons

Copyright © 2008 John Wiley & Sons, Inc.
All rights reserved.

ISBN: 978-0-470-07372-8

Chapter One

Discrete Probability Models

1.1 INTRODUCTION

The mathematical study of probability can be traced to the seventeenth-century correspondence between Blaise Pascal and Pierre de Fermat, French mathematicians of lasting fame. The Chevalier de Méré had posed questions to Pascal concerning gambling, which led to Pascal's correspondence with Fermat. One question was this: Is a gambler equally likely to succeed in the two games: (1) at least one 6 in four throws of one six-sided die, and (2) at least one double-6 (6-6) in 24 throws of two six-sided dice? At that time it seemed to many that the answer was yes. Some believe that de Méré had empirical evidence that the first event was more likely to occur than the second, although we should be skeptical of that, since the probabilities turn out to be 0.5178 and 0.4914, quite close. After studying Chapter One, students should be able to verify these probabilities; after Chapter Six, they should be able to determine how many times de Méré would have to play these games in order to distinguish between the probabilities.
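The two probabilities follow from the complement of "no success": the chance of no 6 in four throws is (5/6)⁴, and the chance of no double-6 in 24 throws is (35/36)²⁴. A quick check (the book's own simulations use S-Plus; Python is used here merely as a stand-in):

```python
from fractions import Fraction

# de Mere's first game: at least one 6 in four throws of one die
p1 = 1 - Fraction(5, 6) ** 4     # complement of "no 6 in four throws"
# de Mere's second game: at least one 6-6 in 24 throws of two dice
p2 = 1 - Fraction(35, 36) ** 24  # complement of "no 6-6 in 24 throws"

print(float(p1))  # 0.5177...
print(float(p2))  # 0.4914...
```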

In the eighteenth century, probability theory was applied to astronomy and to the study of errors of measurement in general. In the nineteenth and twentieth centuries, applications were extended to biology, the social sciences, medicine, and engineering, to almost every discipline. Applications to genetics, for example, continue to grow rapidly, as probabilistic models are developed to handle the masses of data being collected. Large banks, credit companies, and insurance and marketing firms are all using probability and statistics to help them determine operating rules.

We begin with discrete probability theory, for which the events of interest often concern count data. Although many of the examples used to illustrate the theory involve gambling games, students should remember that the theory and methods are applicable to many disciplines.

1.2 SAMPLE SPACES, EVENTS, AND PROBABILITY MEASURES

We begin our study of probability by considering the results of 400 consecutive throws of a fair die, a six-sided cube for which each of the numbers 1, 2, ..., 6 is equally likely to be the number showing when the die is thrown.

The frequencies are:

Face       1   2   3   4   5   6
Frequency  60  73  65  58  74  70

We use these data to motivate the definitions and theory to be presented. Consider, for example, the following question: What is the probability that the five numbers appearing in five throws of a die are all different? Among the 80 consecutive sequences of five numbers above, in only five cases were all five numbers different, a relative frequency of 5/80 = 0.0625. In another experiment, with 2000 sequences of five throws each, all were different 183 times, a relative frequency of 0.0915. Is there a way to determine the long-run relative frequency? Put another way, what could we expect the relative frequency to be in 1 million throws of five dice?

It should seem reasonable that all possible sequences of five consecutive integers from 1 to 6 are equally likely. For example, prior to the 400-throw experiment, each of the first two sequences, 61635 and 52244, was equally likely. For this example, such five-digit sequences will be called outcomes or sample points. The collection of all possible such five-digit sequences will be denoted by S, the sample space. In more mathematical language, S is the Cartesian product of the set A = {1, 2, 3, 4, 5, 6} with itself five times. This collection of sequences is often written as A⁽⁵⁾. Thus, S = A⁽⁵⁾ = A × A × A × A × A. The number of outcomes (or sample points) in S is 6⁵ = 7776. It should seem reasonable to suppose that all outcomes (five-digit sequences) have probability 1/6⁵.

We have already defined a probability model for this experiment. As we will see, it is enough in cases in which the sample space is discrete (finite or countably infinite) to assign probabilities, nonnegative numbers summing to 1, to each outcome in the sample space S. A discrete probability model has been defined for an experiment when (1) a finite or countably infinite sample space has been defined, with each possible result of the experiment corresponding to exactly one outcome; and (2) probabilities, nonnegative numbers, have been assigned to the outcomes in such a way that they sum to 1. It is not necessary that the probabilities assigned all be the same as they are for this example, although that is often realistic and convenient.

We are interested in the event A that all five digits in an outcome are different. Notice that this event A is a subset of the sample space S. We say that an event A has occurred if the outcome is a member of A. In this case event A did not occur for any of the eight outcomes in the first row above.

We define the probability of the event A, denoted P(A), to be the sum of the probabilities of the outcomes in A. By defining the probability of an event in this way, we assure that the probability measure P, defined for all subsets (events, in probability language) of S, obeys certain axioms for probability measures (to be stated later). Because our probability measure P assigns equal probabilities to all outcomes, to find P(A) it is enough to determine the number of outcomes N(A) in A, for then P(A) = N(A)[1/N(S)] = N(A)/N(S). Of course, this is the case only because we assigned equal probabilities to all outcomes.

To determine N(A), we can apply the multiplication principle. A is the collection of 5-tuples with all components different. Each outcome in A corresponds to a way of filling in the following five cells:

[ILLUSTRATION OMITTED]

The first cell can hold any of the six numbers. Given the number in the first cell, and given that the outcome must be in A, the second cell can be any of five numbers, all different from the number in the first cell. Similarly, given the numbers in the first two cells, the third cell can contain any of four different numbers. Continuing in this way, we find that N(A) = (6)(5)(4)(3)(2) = 720 and that P(A) = 720/7776 = 0.0926, close to the value obtained for 2000 experiments. The number N(A) = 720 is the number of permutations of six things taken five at a time, indicated by P(6, 5).
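The count N(A) = P(6, 5) and the resulting probability can be confirmed numerically, and a Monte Carlo run mimics the 2000-sequence experiment on a larger scale. A sketch in Python (standing in for the book's S-Plus):

```python
import random
from math import perm  # perm(n, k): permutations of n things taken k at a time

# Exact: N(A) = P(6, 5) = 720 favorable outcomes among 6^5 = 7776 equally likely ones
p_exact = perm(6, 5) / 6 ** 5
print(p_exact)  # 0.0925925...

# Monte Carlo: relative frequency of "all five throws different"
random.seed(1)  # fixed seed so the run is reproducible
n = 100_000
hits = sum(len(set(random.choices(range(1, 7), k=5))) == 5 for _ in range(n))
print(hits / n)  # close to 0.0926
```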

Example 1.2.1 Consider the following discrete probability model, with sample space S = {a, b, c, d, e, f}.

Outcome ω    a     b     c     d     e     f
P(ω)         0.30  0.20  0.25  0.10  0.10  0.05

Let A = {a, b, d} and B = {b, d, e}. Then A ∪ B = {a, b, d, e} and P(A ∪ B) = 0.3 + 0.2 + 0.1 + 0.1 = 0.7. In addition, A ∩ B = {b, d}, so that P(A ∩ B) = 0.2 + 0.1 = 0.3. Notice that P(A ∪ B) = P(A) + P(B) - P(A ∩ B). (Why must this be true?) The complement of an event D, denoted by Dᶜ, is the collection of outcomes in S that are not in D. Thus, P(Aᶜ) = P({c, e, f}) = 0.25 + 0.10 + 0.05 = 0.40. Notice that P(Aᶜ) = 1 - P(A). Why must this be true?
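The bookkeeping in Example 1.2.1 is easy to mechanize. In the sketch below the measure is a dictionary and the helper `prob` (our name, not the book's) sums outcome probabilities over an event:

```python
# Probability measure of Example 1.2.1, stored outcome by outcome
P = {"a": 0.30, "b": 0.20, "c": 0.25, "d": 0.10, "e": 0.10, "f": 0.05}
assert abs(sum(P.values()) - 1.0) < 1e-12  # probabilities must sum to 1

def prob(event):
    """P(A) = sum of the probabilities of the outcomes in A."""
    return sum(P[w] for w in event)

S = set(P)
A, B = {"a", "b", "d"}, {"b", "d", "e"}
print(round(prob(A | B), 10))  # 0.7
print(round(prob(A & B), 10))  # 0.3
print(round(prob(S - A), 10))  # 0.4  (complement of A)
```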

Let us consider one more example before more formally stating the definitions we have already introduced.

Example 1.2.2 A penny and a dime are tossed. We are to observe the number X of heads that occur and determine P(X = k) for k = 0, 1, 2. The symbol X, used here for its convenience in defining the events [X = 0], [X = 1], and [X = 2], will be called a random variable (rv). P(X = k) is shorthand for P([X = k]). We delay a more formal discussion of random variables.

Let S₁ = {HH, HT, TH, TT} = {H, T}⁽²⁾, where the results for the penny and dime are indicated in this order, with H denoting head and T denoting tail. It should seem reasonable to assign equal probabilities 1/4 to each of the four outcomes. Denote the resulting probability measure by P₁. Thus, for A = [event that the coins give the same result] = {HH, TT}, P₁(A) = 1/4 + 1/4 = 1/2.

The 400 throws of a die can be used to simulate 400 throws of a coin, and therefore 200 throws of two coins, by considering 1, 2, and 3 as heads and 4, 5, and 6 as tails. For example, using the first 10 throws, proceeding across the first row, we get TH, TH, TT, HH, TT. For all 400 die throws, we get 50 cases of HH, 55 of HT, 47 of TH, and 48 of TT, with corresponding relative proportions 0.250, 0.275, 0.235, and 0.240. For the experiment with 10,000 throws, simulating 5000 pairs of coin tosses, we obtain 1288 HH's, 1215 HT's, 1232 TH's, and 1265 TT's, with relative frequencies 0.2576, 0.2430, 0.2464, and 0.2530. Our model (S₁, P₁) seems to fit well.

For this model we get P₁(X = 0) = 1/4, P₁(X = 1) = 1/4 + 1/4 = 1/2, and P₁(X = 2) = 1/4. If we are interested only in X, we might consider a slightly smaller model, with sample space S₂ = {0, 1, 2}, where these three outcomes represent the numbers of heads occurring. Although it is tempting to make the model simpler by assigning equal probabilities 1/3, 1/3, 1/3 to these outcomes, it should be obvious that the empirical results of our experiments with 400 and 10,000 tosses are not consistent with such a model. It should seem reasonable, instead, to assign probabilities 1/4, 1/2, 1/4, thus defining a probability measure P₂ on S₂. The model (S₂, P₂) is a recoding, or reduction, of the model (S₁, P₁), with the outcomes HT and TH of S₁ corresponding to the single outcome X = 1 of S₂, with corresponding probability determined by adding the probabilities 1/4 and 1/4 of HT and TH.
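The reduction from (S₁, P₁) to (S₂, P₂) is a general recipe: group the outcomes of S₁ by the value of X and add their probabilities. A minimal sketch (Python in place of the book's S-Plus):

```python
from itertools import product

# Model (S1, P1): the four equally likely penny/dime outcomes
P1 = {w: 1 / 4 for w in product("HT", repeat=2)}

# Recode to (S2, P2), where the outcome is X = number of heads
P2 = {}
for w, p in P1.items():
    x = w.count("H")
    P2[x] = P2.get(x, 0) + p  # add probabilities of outcomes mapping to the same x

print({x: P2[x] for x in sorted(P2)})  # {0: 0.25, 1: 0.5, 2: 0.25}
```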

The model (S₂, P₂) is simpler than the model (S₁, P₁) in the sense that it has fewer outcomes. On the other hand, it is more complex in the sense that the probabilities are unequal. In choosing appropriate probability models, we often have two or more possible models. The choice of a model will depend on its approximation of experimental evidence, consistency with fundamental principles, and mathematical convenience.

Let us stop now to define more formally some of the terms already introduced.

Definition 1.2.1 A sample space is a collection S of all possible results, called outcomes, of an experiment. Each possible result of the experiment must correspond to one and only one outcome in S. A sample space is discrete if it has a finite or countably infinite number of outcomes. (A set is countably infinite if it can be put into one-to-one correspondence with the positive integers.)

Definition 1.2.2 An event is a subset of a sample space. An event A is said to occur if the outcome of an experiment is a member of A.

Definition 1.2.3 A probability measure P on a discrete sample space S is a function defined on the subsets of S such that:

(a) P({ω}) ≥ 0 for all points ω ∈ S.

(b) P(A) = Σ_{ω ∈ A} P(ω) for all subsets A of S.

(c) P(S) = 1.

For simplicity, we write P({ω}) as P(ω).

Definition 1.2.4 A probability model is a pair (S, P), where P is a probability measure on S. In writing P({ω}) as P(ω), we are abusing notation slightly by using the symbol P to denote both a function on S and a function on the subsets of S. We assume that students are familiar with the notation of set theory: union, A ∪ B; intersection, A ∩ B; and complement, Aᶜ. Thus, for events A and B, the event A ∪ B is said to occur if the outcome is a member of A or B (by "or" we include the case that the outcome is in both A and B). The event A ∩ B is said to occur if both A and B occur. Aᶜ, called the complement of A, is said to occur if A does not occur. For convenience we sometimes write A ∩ B as AB.

We also assume that the student is familiar with the notation for relationships among sets, A ⊂ B and A ⊃ B. Thus, if A ⊂ B, the occurrence of event A implies that B must occur. We sometimes use the language "event A implies event B." For the preceding two-coin-toss example, the event [X = 1] implies the event [X ≥ 1].

Let ∅ denote the empty event, the subset of S consisting of no outcomes. Thus, A ∩ Aᶜ = ∅. We say that two events A and B are mutually exclusive if their intersection is empty, that is, A ∩ B = ∅. Thus, if A and B are mutually exclusive, the occurrence of one of them implies that the other cannot occur. In set-theoretic language we say that A and B are disjoint. De Morgan's laws give relationships among intersection, union, and complement:

(1) (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ and (2) (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ.

These can be verified from a Venn diagram or by showing that any element in the set on the left is a member of the set on the right, and vice versa (see Figure 1.2.1).
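Beyond a Venn diagram, De Morgan's laws can be checked exhaustively on a small sample space. The brute-force sketch below tests every pair of events of a six-point space (the `subsets` helper is ours, not the book's):

```python
from itertools import product

S = set("abcdef")  # a six-outcome sample space, as in Example 1.2.1

def subsets(s):
    """Yield all 2^|s| subsets of s via bit masks."""
    items = list(s)
    for mask in range(2 ** len(items)):
        yield {x for i, x in enumerate(items) if mask >> i & 1}

# Complements are taken within S: A^c = S - A
events = list(subsets(S))
for A, B in product(events, events):
    assert S - (A & B) == (S - A) | (S - B)  # (A n B)^c = A^c u B^c
    assert S - (A | B) == (S - A) & (S - B)  # (A u B)^c = A^c n B^c
print("De Morgan's laws verified for all", len(events) ** 2, "pairs of events")
```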

Properties of a Probability Measure P on a Sample Space S

1. P(∅) = 0.

2. P(S) = 1.

3. For any event A, P(Aᶜ) = 1 - P(A).

4. For any events A and B, P(A ∪ B) = P(A) + P(B) - P(A ∩ B). For three events A, B, C, P(A ∪ B ∪ C) = P(A) + P(B) + P(C) - P(A ∩ B) - P(A ∩ C) - P(B ∩ C) + P(A ∩ B ∩ C). This follows from repeated use of the identity for two events. An almost obvious similar result holds for the probability of the union of n events, with 2ⁿ - 1 terms on the right.

5. For events A and B with A ∩ B = ∅, P(A ∪ B) = P(A) + P(B). More generally, if A₁, A₂, ... are disjoint (mutually exclusive) events, P(A₁ ∪ A₂ ∪ ···) = P(A₁) + P(A₂) + ···. This property of P is called countable additivity. Since Aₖ for k > n could be ∅, the same equality holds when the infinite union is replaced by the union of the first n events, for any integer n > 0.
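Property 4 (inclusion-exclusion) is easy to spot-check numerically. The sketch below uses the measure of Example 1.2.1 with three events A, B, C chosen by us for illustration, not taken from the book:

```python
# Probability measure of Example 1.2.1
P = {"a": 0.30, "b": 0.20, "c": 0.25, "d": 0.10, "e": 0.10, "f": 0.05}

def prob(event):
    return sum(P[w] for w in event)

A, B, C = {"a", "b"}, {"b", "c"}, {"c", "d", "e"}
lhs = prob(A | B | C)
rhs = (prob(A) + prob(B) + prob(C)
       - prob(A & B) - prob(A & C) - prob(B & C)
       + prob(A & B & C))
assert abs(lhs - rhs) < 1e-12  # both sides agree (here both equal 0.95)
print(round(lhs, 10))  # 0.95
```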

Let us make use of some of these properties in a few examples.

Example 1.2.3 Smith and Jones each throw three coins. Let X denote the number of heads for Smith. Let Y denote the number of heads for Jones. Find P(X = Y).
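The excerpt breaks off before working this example, but under the equally-likely model the answer can be checked by brute-force enumeration of all 2⁶ = 64 outcomes for the six coins (a numerical check, not necessarily the book's solution method):

```python
from fractions import Fraction
from itertools import product

count = total = 0
for coins in product("HT", repeat=6):
    x = coins[:3].count("H")  # X = Smith's number of heads
    y = coins[3:].count("H")  # Y = Jones's number of heads
    total += 1
    count += (x == y)

print(Fraction(count, total))  # 5/16
```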

(Continues...)


Excerpted from Models for Probability and Statistical Inference by James H. Stapleton. Copyright © 2008 by John Wiley & Sons, Inc. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Information in the "About this book" section may refer to different editions of this title.

  • Publisher: Wiley-Interscience
  • Publication date: 2008
  • ISBN 10: 0470073721
  • ISBN 13: 9780470073728
  • Binding: Hardcover
  • Language: English
  • Edition number: 1
  • Pages: 440

Other known editions of the same title

9780470183410: Models for Probability and Statistical Inference: Theory and Applications (ISBN 10: 0470183411, hardcover)