Data science today is a lot like the Wild West: there’s endless opportunity and
excitement, but also a lot of chaos and confusion. If you’re new to data science and
applied machine learning, evaluating a machine-learning model can seem pretty overwhelming.
Now you have help. With this O’Reilly report, machine-learning expert Alice Zheng takes
you through the model evaluation basics.
In this overview, Zheng first introduces the machine-learning workflow, and then dives into
evaluation metrics and model selection. The latter half of the report focuses on
hyperparameter tuning and A/B testing, which may benefit more seasoned machine-learning
practitioners.
With this report, you will:

- Learn the stages involved in developing a machine-learning model for use in a software application
- Understand the metrics used for supervised learning models, including classification, regression, and ranking
- Walk through evaluation mechanisms, such as hold-out validation, cross-validation, and bootstrapping
- Explore hyperparameter tuning in detail, and discover why it's so difficult
- Learn the pitfalls of A/B testing, and examine a promising alternative: multi-armed bandits
- Get suggestions for further reading, as well as useful software packages
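As a taste of the evaluation mechanisms the report covers, here is a minimal sketch of how k-fold cross-validation partitions a dataset. The helper `k_fold_indices` is hypothetical, written only for this illustration and not taken from the report; in practice you would use a library implementation such as scikit-learn's `KFold`:

```python
def k_fold_indices(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation.

    Each of the k folds serves once as the held-out test set while the
    remaining folds form the training set. Illustrative sketch only.
    """
    # Distribute samples as evenly as possible across the k folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size
```

For example, `list(k_fold_indices(10, 5))` produces five splits, each holding out a disjoint test fold of two samples, so every sample is used for testing exactly once.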
Alice Zheng is the Director of Data Science at Dato, a Seattle-based startup that offers powerful large-scale machine learning and graph analytics tools. A tool builder and an expert in machine-learning algorithms, she has conducted research spanning software diagnosis, computer network security, and social network analysis.
Posted 2024-11-27
Reader review: The coverage of model evaluation is quite good, and the section on A/B testing is especially enlightening.
Reader review: Practical.