Mark D. Shermis, Ph.D. is a professor at the University of Akron and the principal investigator of the Hewlett Foundation-funded Automated Student Assessment Prize (ASAP) program. He has published extensively on machine scoring and recently co-authored the textbook Classroom Assessment in Action with Francis DiVesta. Shermis is a fellow of the American Psychological Association (Division 5) and the American Educational Research Association.
Jill Burstein, Ph.D. is a managing principal research scientist in Educational Testing Service's Research and Development Division. Her research interests include natural language processing, automated essay scoring and evaluation, educational technology, discourse and sentiment analysis, English language learning, and writing research. She holds 13 patents for natural language processing educational technology applications. Two of her inventions are e-rater®, an automated essay evaluation application, and Language Muse℠, an instructional authoring tool for teachers of English learners.
This comprehensive, interdisciplinary handbook reviews the latest methods and technologies used in automated essay evaluation (AEE). Highlights include the latest in the evaluation of performance-based writing assessments and recent advances in the teaching of writing, language testing, cognitive psychology, and computational linguistics. This greatly expanded follow-up to Automated Essay Scoring reflects the numerous advances that have taken place in the field since 2003, including automated essay scoring and diagnostic feedback. Each chapter features a common structure, including an introduction and a conclusion. Ideas for diagnostic and evaluative feedback are sprinkled throughout the book.
Highlights of the book’s coverage include:
The latest research on automated essay evaluation.
Descriptions of the major scoring engines, including E-rater®, the Intelligent Essay Assessor, the IntelliMetric™ Engine, c-rater™, and LightSIDE.
Applications of the technology, including a large-scale system used in West Virginia.
A systematic framework for evaluating research and technological results.
Descriptions of AEE methods that can be replicated for languages other than English, as illustrated by an example from China.
Chapters from key researchers in the field.
The book opens with an introduction to AEE and a review of the "best practices" of teaching writing, along with tips on the use of automated analysis in the classroom. Next, the book highlights the capabilities and applications of several scoring engines, including E-rater®, the Intelligent Essay Assessor, the IntelliMetric™ engine, c-rater™, and LightSIDE. Here readers will find an actual application of AEE in West Virginia; psychometric issues related to AEE such as validity, reliability, and scaling; and the use of automated scoring to detect reader drift, grammatical errors, discourse coherence quality, and the impact of human rating on AEE. A review of the cognitive foundations underlying methods used in AEE is also provided. The book concludes with a comparison of the various AEE systems and speculation about the future of the field in light of current educational policy.
Ideal for educators, professionals, curriculum specialists, and administrators responsible for developing writing programs or distance learning curricula, those who teach using AEE technologies, policy makers, and researchers in education, writing, psychometrics, cognitive psychology, and computational linguistics, this book also serves as a reference for graduate courses on automated essay evaluation taught in education, computer science, language, linguistics, and cognitive psychology.