Professor McAllester received his B.S., M.S., and Ph.D. degrees from the Massachusetts Institute of Technology in 1978, 1979, and 1987, respectively. He served on the faculty of Cornell University for the academic year 1987-1988 and on the faculty of MIT from 1988 to 1995. He was a member of technical staff at AT&T Labs-Research from 1995 to 2002. He has been a fellow of the Association for the Advancement of Artificial Intelligence (AAAI) since 1997. Since 2002 he has been Chief Academic Officer at the Toyota Technological Institute at Chicago (TTIC). He has received two 20-year "test of time" awards: one for a paper on systematic nonlinear planning at the AAAI conference, and one for a paper on interval methods for constraint solving at the International Conference on Logic Programming.

Professor McAllester's research areas include machine learning, the theory of programming languages, automated reasoning, AI planning, computer game playing (computer chess), computational linguistics, and computer vision. A 1991 paper on AI planning proved to be one of the most influential papers of the decade in that area. A 1992 paper with Robert Givan introduced natural logic for more efficient automated reasoning, later reinvented by MacCartney and Manning. A 1993 paper on computer game algorithms influenced the design of the algorithms used in the Deep Blue system that defeated Garry Kasparov. A 1998 paper on machine learning theory introduced PAC-Bayesian theorems, which combine Bayesian and non-Bayesian methods. A 2001 paper with Andrew Appel introduced the influential step-indexed model of recursive types in programming languages. A 2002 paper introduced a general framework for dynamic programming based on bottom-up logic programming, which became the foundation of Jason Eisner's Dyna programming language.

Prof. McAllester was part of a team (with Pedro Felzenszwalb, Ross Girshick, and Deva Ramanan) that developed the deformable part model (DPM), which dominated visual object detection methods from 2007 through 2011. He was part of a team (with Raquel Urtasun and Koichiro Yamaguchi) that dominated the KITTI leaderboard for stereo vision and optical flow from 2012 to 2015. In 2013 he introduced a theoretical analysis of dropout learning for deep neural networks based on PAC-Bayesian generalization bounds. In 2015 he completed a formulation of morphoid type theory: a classically typed foundation of mathematics that supports the notion of isomorphism while avoiding the complexities of homotopy type theory.