The explosion of data science (DS) across all areas of technology, coupled with the rapid growth of machine learning (ML) techniques over the last decade, has created novel applications in automation. Many who work with DS techniques invoke the concept of the “black box” to explain how ML works, noting that algorithms find patterns in data that humans might not. While the underlying mathematics is still being developed, the implications of applying ML, specifically to questions of automation, are also being studied but remain poorly understood. The decisions made by ML practitioners with respect to data selection, model training and testing, data visualization, and model application remain relatively unconstrained and can yield unexpected results at the systems level. Unfortunately, human factors engineers concerned with automation often have limited training in, and awareness of, DS and ML applications and are thus unable to provide the meaningful guidance needed to ensure the safety of these newly emerging automated systems. Moreover, undergraduate and graduate programs in human factors engineering (HFE) have not kept pace with these developments, and future human factors engineers may continue to find themselves unable to contribute meaningfully to the development of automated systems based on algorithms derived from ML. In this paper, human factors engineers and educators explore some of the challenges that specific ML techniques pose to our understanding of automation, and contrast these with an outline of historical work in HFE that has contributed to our understanding of safe and effective automation. Examples from more conventional applications, using both supervised and unsupervised learning techniques, are explored with respect to their implications for algorithm performance, use in system automation, and the potential for unintended results. Implications for human factors engineering education are discussed.