MDtrek® Tools for Academic Medicine
Variable Milestone Evaluations
HSoft is the national leader in the development of variable milestone evaluations. Unlike traditional evaluations, our variable milestone forms change according to each trainee’s strengths and weaknesses. Depending on the importance of a particular skill, you set a threshold for success and the number of evaluations required to assure competence. Our forms take care of the rest: populating in a developmentally appropriate sequence, retaining milestones where the learner has novice- or apprentice-level skill, and retiring milestones the learner has mastered. This guides faculty to assess a trainee only in areas of weakness or areas not yet evaluated, focuses feedback on specific behaviors, and helps learners develop their own educational objectives. As a program director, you can guide advancement to increased responsibility based on clear, objective criteria.
To learn more about MDtrek® Variable Milestone evaluations, please contact us to schedule an online presentation.
Frequently Asked Questions
What are milestones?
The ACGME competencies can be subdivided into subcategories of the knowledge, skills, and attitudes required for a trainee to progress to greater independence and responsibility. Several subspecialties have developed expected timelines for trainee development, with examples of specific skills. Milestones are measures of progress along this timeline.
Why have variable milestone evaluation forms?
Given the complexity of managing the progression of multiple learners through hundreds of assessment points, many programs devote significant resources to developing situation-specific forms that ultimately carry little statistical weight. These forms are difficult to integrate with each other and have poor predictive capacity. Because forms are designed around average learners, assessment of those who excel, or who have specific weaknesses, is impaired. There is little ability to control for common biases related to central tendency, halo effects, and compensation. While there is no perfect system for evaluating trainees, our variable milestone methodology allows the program to automate evaluation around the learner’s individual progression of skill.
How many milestones can be accommodated?
The number of milestones that can be assessed depends on your frequency of assessment and each milestone’s threshold for retirement. For instance, in one program where interns are assessed every two weeks, and most milestones require four passing evaluations, 83% of interns completed all 44 milestones in 12 months. Depending on a program’s willingness to invest in more frequent assessments, a larger number of milestones could be accommodated.
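The figures above can be checked with some back-of-envelope arithmetic. This sketch uses the example numbers from the answer (biweekly assessment, 44 milestones, four passing evaluations each) plus an assumed 12 milestones per form; actual program parameters will vary.

```python
# Back-of-envelope capacity check using the example figures above.
# All values are illustrative assumptions, not MDtrek defaults.
sessions_per_year = 52 // 2     # assessed every two weeks -> 26 sessions
milestones = 44
passes_required = 4             # most milestones retire after four passes

# Total passing observations needed to retire every milestone.
total_passes_needed = milestones * passes_required

# Observations actually collected, assuming ~12 milestones per form
# (within the 10-15 range faculty can reliably assess at one time).
milestones_per_form = 12
observations_per_year = sessions_per_year * milestones_per_form

print(total_passes_needed, observations_per_year)
```

With these assumptions the program collects more observations per year than the minimum required, which is consistent with most interns completing all milestones within 12 months.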
How do you assess milestones?
For each milestone, proficiency may be demonstrated by a single observation of capacity, an average quality of performance, or a frequency of high-level performance. For instance, a milestone might be the quality of a specific case presentation, a description of the average quality of a trainee’s case presentations during a time period, or the percentage of presentations during that period that were at a high level. Behavioral research suggests that the final strategy is the most reliable in most assessments, but for specific milestones trainees might be best assessed with other criteria. The ability to set each milestone’s threshold for passing and number of observations allows a program to coordinate and combine data from each type of assessment seamlessly.
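The three passing-criteria types described above can be sketched as simple predicates. This is a minimal illustration with hypothetical names, not the MDtrek interface; scores are assumed to be on a numeric scale.

```python
# Minimal sketch of the three milestone passing criteria (assumed names).

def passes_single_observation(score, threshold):
    # Pass if one observation of capacity meets the threshold.
    return score >= threshold

def passes_average(scores, threshold):
    # Pass if the average quality over the period meets the threshold.
    return sum(scores) / len(scores) >= threshold

def passes_frequency(scores, high_level, min_fraction):
    # Pass if a sufficient fraction of observations were at a high level.
    high = sum(1 for s in scores if s >= high_level)
    return high / len(scores) >= min_fraction
```

Because each criterion reduces to a pass/fail decision, results from all three types can be counted toward the same retirement threshold.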
How many milestones are there on each form?
This is at the discretion of the program. Our experience is that faculty can rarely assess more than 10-15 milestones consistently at one time.
What do the forms look like?
The variable forms in most cases look very similar to the static forms they replace. Multiple question formats are available, ranging from traditional Likert scales to visual analog scales with graphical sliders that emphasize developmental stages.
What if my faculty members differ from each other in their assessments?
They will. Training faculty will increase inter-rater reliability, but some variance will persist. The systematic application of repeat assessments reduces some of the impact of this variance. In addition, a program may want to have different milestone groupings for core faculty who can be trained and coached, and other groupings for faculty who participate less frequently.
What if a trainee backslides?
It is not uncommon for a trainee’s performance to decline in one area while they are being assessed in another. We can set up a system that lets a program director or faculty member mark any milestone, even one outside the current variable milestone progression, at a novice level. This automatically resets the milestone, forcing a brief retrenchment and reassessment.
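The reset behavior described above can be sketched as follows. The record fields and function name here are hypothetical, chosen only to illustrate the logic: the milestone drops back to novice, prior passing evaluations no longer count, and the milestone re-enters the active rotation.

```python
# Minimal sketch (hypothetical names) of the backslide reset described above.
NOVICE = 1

def reset_milestone(trainee, milestone_id):
    record = trainee["milestones"][milestone_id]
    record["level"] = NOVICE      # drop back to novice
    record["passes"] = 0          # prior passing evaluations no longer count
    record["retired"] = False     # milestone re-enters the variable rotation

# Example: a previously retired milestone is flagged and reset.
trainee = {
    "milestones": {
        "case_presentation": {"level": 4, "passes": 4, "retired": True},
    }
}
reset_milestone(trainee, "case_presentation")
```

After the reset, the variable form logic would treat the milestone exactly as it treats any unmastered milestone, so reassessment requires no special handling.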
How do we create summative assessments at the end of the year comparing trainees?
In most cases, completion of all milestones for a training year should correlate with satisfactory capacity to progress to the next level. We have several tools to help program directors monitor each trainee’s progress. Extraordinary residents can distinguish themselves by rapid progression through milestones or by achieving high average scores on each milestone. A program can design an individualized algorithm for weighting each milestone; it might emphasize the most recent performances of a milestone or define excellence around the most advanced milestones. Ultimately, proof of capacity will come from the faculty team’s subjective evaluation of a trainee’s total performance. The variable milestone form can help inform and buttress this process, whatever form it takes.