There have been many studies providing details on using results from the Fundamentals of Engineering (FE) exam as metrics for meeting ABET program outcomes [1]. However, implementing an FE-based set of metrics poses challenges beyond assessing the validity of the results. Programs using FE-based metrics must also determine where the metrics fit in the overall assessment process. We present a method for using FE-based metrics as an integral part of the ABET program assessment process. The principal issues we address are: (1) the validity of using FE metrics for a group of graduating students when not all of them take the exam; (2) establishing and quantifying levels of performance; and (3) creating a trigger mechanism for taking action based on longitudinal results.
The Department of Mechanical and Biomedical Engineering at Boise State University created a process that integrates metrics from FE results with other metrics in our loop for outcomes assessment and continuous improvement. Our process prevents us from taking inappropriate action based on isolated negative results from the FE exam, and we have used it to make demonstrable improvements in our curriculum. Two examples of faculty action taken in response to unsatisfactory or questionable FE-metric results before our last ABET visit are presented and discussed.
© 2013, American Society for Engineering Education, Proceedings of the 2013 ASEE Annual Conference & Exposition (Atlanta, GA).
Guarino, Joe C.; Ferguson, James R.; and Pakala, V. Krishna C. (2013). "Quantitative Assessment of Program Outcomes Using Longitudinal Data from the FE Exam." 2013 ASEE Annual Conference & Exposition, Atlanta, Georgia, Proceedings, 7325-1 - 7325-10.