Simon Foucart (Texas A&M University)
Mathematical Pictures at a Data Science Exhibition
- Paperback
This text explores a diverse set of data science topics through a mathematical lens, helping mathematicians become acquainted with data science in general, and machine learning, optimal recovery, compressive sensing, optimization, and neural networks in particular. It will also be valuable to data scientists seeking mathematical sophistication.
Other customers were also interested in
- Jeffrey A. Fessler (University of Michigan, Ann Arbor): Linear Algebra for Data Science, Machine Learning, and Signal Processing, 45,99 €
- John H. Maindonald (Statistics Research Associates, Wellington, New Zealand): A Practical Guide to Data Analysis Using R, 116,99 €
- Marcus Hutter: An Introduction to Universal Artificial Intelligence, 65,99 €
- Marcus Hutter: An Introduction to Universal Artificial Intelligence, 166,99 €
- Tong Zhang (Hong Kong University of Science and Technology): Mathematical Analysis of Machine Learning Algorithms, 41,99 €
- David Foster: Generative Deep Learning, 50,99 €
- Lijing Wang (Stanford University, California): Data Science for the Geosciences, 37,99 €
Product details
- Publisher: Cambridge University Press
- Number of pages: 340
- Publication date: 29 March 2022
- Language: English
- Dimensions: 229 mm x 152 mm x 19 mm
- Weight: 506 g
- ISBN-13: 9781009001854
- ISBN-10: 100900185X
- Item no.: 63264562
- Manufacturer information:
- Libri GmbH
- Europaallee 1
- 36244 Bad Hersfeld
- gpsr@libri.de
Simon Foucart is Professor of Mathematics at Texas A&M University, where he was named Presidential Impact Fellow in 2019. He previously co-authored, with Holger Rauhut, the influential book A Mathematical Introduction to Compressive Sensing (2013).
Part I. Machine Learning: 1. Rudiments of Statistical Learning
2. Vapnik-Chervonenkis Dimension
3. Learnability for Binary Classification
4. Support Vector Machines
5. Reproducing Kernel Hilbert Spaces
6. Regression and Regularization
7. Clustering
8. Dimension Reduction
Part II. Optimal Recovery: 9. Foundational Results of Optimal Recovery
10. Approximability Models
11. Ideal Selection of Observation Schemes
12. Curse of Dimensionality
13. Quasi-Monte Carlo Integration
Part III. Compressive Sensing: 14. Sparse Recovery from Linear Observations
15. The Complexity of Sparse Recovery
16. Low-Rank Recovery from Linear Observations
17. Sparse Recovery from One-Bit Observations
18. Group Testing
Part IV. Optimization: 19. Basic Convex Optimization
20. Snippets of Linear Programming
21. Duality Theory and Practice
22. Semidefinite Programming in Action
23. Instances of Nonconvex Optimization
Part V. Neural Networks: 24. First Encounter with ReLU Networks
25. Expressiveness of Shallow Networks
26. Various Advantages of Depth
27. Tidbits on Neural Network Training
Appendix A. High-Dimensional Geometry
Appendix B. Probability Theory
Appendix C. Functional Analysis
Appendix D. Matrix Analysis
Appendix E. Approximation Theory.







