Combinatorial Methods in Density Estimation

Bibliographic Details
Main Authors: Devroye, Luc; Lugosi, Gabor (Author)
Format: eBook
Language: English
Published: New York, NY: Springer New York, 2001
Edition: 1st ed. 2001
Series:Springer Series in Statistics
Subjects: Statistical Theory and Methods; Statistics
Online Access: https://doi.org/10.1007/978-1-4613-0125-7?nosfx=y
Collection: Springer Book Archives -2004 - Collection details see MPG.ReNa
LEADER 07237nmm a2200337 u 4500
001 EB000620892
003 EBX01000000000000000473974
005 00000000000000.0
007 cr|||||||||||||||||||||
008 140122 ||| eng
020 |a 9781461301257 
100 1 |a Devroye, Luc 
245 0 0 |a Combinatorial Methods in Density Estimation  |h Electronic resource  |c by Luc Devroye, Gabor Lugosi 
250 |a 1st ed. 2001 
260 |a New York, NY  |b Springer New York  |c 2001 
300 |a XII, 209 p  |b online resource 
505 0 |a 1. Introduction -- §1.1. References -- 2. Concentration Inequalities -- §2.1. Hoeffding’s Inequality -- §2.2. An Inequality for the Expected Maximal Deviation -- §2.3. The Bounded Difference Inequality -- §2.4. Examples -- §2.5. Bibliographic Remarks -- §2.6. Exercises -- §2.7. References -- 3. Uniform Deviation Inequalities -- §3.1. The Vapnik-Chervonenkis Inequality -- §3.2. Covering Numbers and Chaining -- §3.3. Example: The Dvoretzky-Kiefer-Wolfowitz Theorem -- §3.4. Bibliographic Remarks -- §3.5. Exercises -- §3.6. References -- 4. Combinatorial Tools -- §4.1. Shatter Coefficients -- §4.2. Vapnik-Chervonenkis Dimension and Shatter Coefficients -- §4.3. Vapnik-Chervonenkis Dimension and Covering Numbers -- §4.4. Examples -- §4.5. Bibliographic Remarks -- §4.6. Exercises -- §4.7. References -- 5. Total Variation -- §5.1. Density Estimation -- §5.2. The Total Variation -- §5.3. Invariance -- §5.4. Mappings -- §5.5. Convolutions -- 
505 0 |a §5.6. Normalization -- §5.7. The Lebesgue Density Theorem -- §5.8. LeCam’s Inequality -- §5.9. Bibliographic Remarks -- §5.10. Exercises -- §5.11. References -- 6. Choosing a Density Estimate -- §6.1. Choosing Between Two Densities -- §6.2. Examples -- §6.3. Is the Factor of Three Necessary? -- §6.4. Maximum Likelihood Does Not Work -- §6.5. L2 Distances Are To Be Avoided -- §6.6. Selection from k Densities -- §6.7. Examples Continued -- §6.8. Selection from an Infinite Class -- §6.9. Bibliographic Remarks -- §6.10. Exercises -- §6.11. References -- 7. Skeleton Estimates -- §7.1. Kolmogorov Entropy -- §7.2. Skeleton Estimates -- §7.3. Robustness -- §7.4. Finite Mixtures -- §7.5. Monotone Densities on the Hypercube -- §7.6. How To Make Gigantic Totally Bounded Classes -- §7.7. Bibliographic Remarks -- §7.8. Exercises -- §7.9. References -- 8. The Minimum Distance Estimate: Examples -- §8.1. Problem Formulation -- §8.2. Series Estimates -- 
505 0 |a §8.3. Parametric Estimates: Exponential Families -- §8.4. Neural Network Estimates -- §8.5. Mixture Classes, Radial Basis Function Networks -- §8.6. Bibliographic Remarks -- §8.7. Exercises -- §8.8. References -- 9. The Kernel Density Estimate -- §9.1. Approximating Functions by Convolutions -- §9.2. Definition of the Kernel Estimate -- §9.3. Consistency of the Kernel Estimate -- §9.4. Concentration -- §9.5. Choosing the Bandwidth -- §9.6. Choosing the Kernel -- §9.7. Rates of Convergence -- §9.8. Uniform Rate of Convergence -- §9.9. Shrinkage, and the Combination of Density Estimates -- §9.10. Bibliographic Remarks -- §9.11. Exercises -- §9.12. References -- 10. Additive Estimates and Data Splitting -- §10.1. Data Splitting -- §10.2. Additive Estimates -- §10.3. Histogram Estimates -- §10.4. Bibliographic Remarks -- §10.5. Exercises -- §10.6. References -- 11. Bandwidth Selection for Kernel Estimates -- §11.1. The Kernel Estimate with Riemann Kernel -- 
505 0 |a §11.2. General Kernels, Kernel Complexity -- §11.3. Kernel Complexity: Univariate Examples -- §11.4. Kernel Complexity: Multivariate Kernels -- §11.5. Asymptotic Optimality -- §11.6. Bibliographic Remarks -- §11.7. Exercises -- §11.8. References -- 12. Multiparameter Kernel Estimates -- §12.1. Multivariate Kernel Estimates—Product Kernels -- §12.2. Multivariate Kernel Estimates—Ellipsoidal Kernels -- §12.3. Variable Kernel Estimates -- §12.4. Tree-Structured Partitions -- §12.5. Changepoints and Bump Hunting -- §12.6. Bibliographic Remarks -- §12.7. Exercises -- §12.8. References -- 13. Wavelet Estimates -- §13.1. Definitions -- §13.2. Smoothing -- §13.3. Thresholding -- §13.4. Soft Thresholding -- §13.5. Bibliographic Remarks -- §13.6. Exercises -- §13.7. References -- 14. The Transformed Kernel Estimate -- §14.1. The Transformed Kernel Estimate -- §14.2. Box-Cox Transformations -- §14.3. Piecewise Linear Transformations -- §14.4. Bibliographic Remarks -- 
505 0 |a §14.5. Exercises -- §14.6. References -- 15. Minimax Theory -- §15.1. Estimating a Density from One Data Point -- §15.2. The General Minimax Problem -- §15.3. Rich Classes -- §15.4. Assouad’s Lemma -- §15.5. Example: The Class of Convex Densities -- §15.6. Additional Examples -- §15.7. Tuning the Parameters of Variable Kernel Estimates -- §15.8. Sufficient Statistics -- §15.9. Bibliographic Remarks -- §15.10. Exercises -- §15.11. References -- 16. Choosing the Kernel Order -- §16.1. Introduction -- §16.2. Standard Kernel Estimate: Riemann Kernels -- §16.3. Standard Kernel Estimates: General Kernels -- §16.4. An Infinite Family of Kernels -- §16.5. Bibliographic Remarks -- §16.6. Exercises -- §16.7. References -- 17. Bandwidth Choice with Superkernels -- §17.1. Superkernels -- §17.2. The Trapezoidal Kernel -- §17.3. Bandwidth Selection -- §17.4. Bibliographic Remarks -- §17.5. Exercises -- §17.6. References -- Author Index 
653 |a Statistical Theory and Methods 
653 |a Statistics  
700 1 |a Lugosi, Gabor  |e [author] 
041 0 7 |a eng  |2 ISO 639-2 
989 |b SBA  |a Springer Book Archives -2004 
490 0 |a Springer Series in Statistics 
028 5 0 |a 10.1007/978-1-4613-0125-7 
856 4 0 |u https://doi.org/10.1007/978-1-4613-0125-7?nosfx=y  |x Publisher  |3 Full text 
082 0 |a 519.5 
520 |a Density estimation has evolved enormously since the days of bar plots and histograms, but researchers and users still struggle with the problem of selecting bin widths. This text explores a new paradigm for the data-based or automatic selection of the free parameters of density estimates in general, so that the expected error is within a given constant multiple of the best possible error. The paradigm can be used in nearly all density estimates and for most model selection problems, both parametric and nonparametric. It is the first book on this topic. The text is intended for first-year graduate students in statistics and learning theory, and offers a host of opportunities for further research and thesis topics. Each chapter corresponds roughly to one lecture and is supplemented with many classroom exercises. A one-year course in probability theory at the level of Feller's Volume 1 should be more than adequate preparation. Gabor Lugosi is Professor at Universitat Pompeu Fabra in Barcelona, and Luc Devroye is Professor at McGill University in Montreal. In 1996, the authors, together with László Györfi, published the successful text A Probabilistic Theory of Pattern Recognition with Springer-Verlag. Both authors have made many contributions in the area of nonparametric estimation.
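The selection guarantee described in the abstract can be sketched concretely. The Python fragment below is a minimal illustration, under assumed toy choices (a Gaussian kernel, candidate bandwidths 0.2 and 1.0, an 80/20 data split, and a Riemann-sum approximation of the integral), of the Scheffé-set comparison behind choosing between two densities (Chapter 6) combined with data splitting (Chapter 10); it is a sketch of the idea, not the authors' full procedure.

import numpy as np

# Minimal sketch: pick between two kernel density estimates via the
# Scheffe-set comparison on held-out data. All numeric choices here
# (sample size, split, bandwidths, grid) are illustrative assumptions.
rng = np.random.default_rng(0)
data = rng.normal(size=500)            # unknown density: standard normal
train, valid = data[:400], data[400:]  # data splitting (Chapter 10)

def kde(points, bandwidth):
    """Gaussian kernel density estimate built from `points`."""
    def f(x):
        u = (np.atleast_1d(x)[:, None] - points[None, :]) / bandwidth
        return np.exp(-0.5 * u**2).sum(axis=1) / (
            len(points) * bandwidth * np.sqrt(2.0 * np.pi))
    return f

f1, f2 = kde(train, 0.2), kde(train, 1.0)  # two competing bandwidths

# Scheffe set A = {x : f1(x) > f2(x)}. Compare each estimate's mass on A
# (Riemann sum on a grid) with the empirical mass of A in the validation set.
grid = np.linspace(-6.0, 6.0, 2001)
dx = grid[1] - grid[0]
A = f1(grid) > f2(grid)
mass1 = f1(grid)[A].sum() * dx
mass2 = f2(grid)[A].sum() * dx
empirical = np.mean(f1(valid) > f2(valid))

# Select the estimate whose mass on A is closer to the empirical measure;
# the book shows such a choice has L1 error within a constant factor
# (three, plus a deviation term) of the better of the two candidates.
best = 0.2 if abs(mass1 - empirical) <= abs(mass2 - empirical) else 1.0
print("selected bandwidth:", best)

Selection among more than two candidates proceeds along the same lines, using the Scheffé sets of all pairs of candidates (Chapter 6, "Selection from k Densities").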