A Class Admitted on Merit

In our earlier post “WCS Doesn’t Matter for Athletes & Minorities” we showed that the Whole Candidate Score is bypassed for certain preferred categories of applicant. In this post, we explore the impact of that bypass.

Specifically, we try to answer two questions: What would a class built solely on the Whole Candidate Score look like? How would it differ from the classes that actually attended?

We examined this previously in our post Merit Based Class Composition. However, we now have some new tools that may provide additional insight into the question.

To do this, we review our dataset using the calculators and tables available at https://www.gainserviceacademyadmission.com/ (hereafter “GSA”). The author’s book, which explains the WCS in great detail, is available here.

Our hypothesis was that comparing a WCS-rank-ordered list of qualified candidates to the actual classes would consistently show that more athletes and minorities are admitted than a strictly merit-based regime would allow.

Indeed this is the case. Let’s take a look.

Housekeeping

First, the housekeeping and disclaimers. The method employed is to simulate the WCS using available information. This information is limited and so we acknowledge limitations on our methodology. For example:

  • We don’t have any data on the USMA-calculated WCS values by candidate (we’re kicking ourselves for not including this in our initial data request).
  • We don’t have indications of which candidates received offers, only those candidates classified as “qualified.” To compensate for any particular class’s variation, we consider trends over multiple classes.
  • We don’t have data on faculty recommendations, CFA scores, or full scores for extracurricular participation. We do have the PAE result codes but not scores.
  • We did not have full athletic participation notes. We could see the number of letters a candidate earned in high school, but not team captaincy or other recognition/honors (e.g. all-area status).
  • We don’t have West Point’s exact point scoring algorithm, only a reverse-engineered one. This should be public, by the way, not a secret. If anyone has it, we’d appreciate a tip!

Given the relative importance of these factors in admissions, we think we can offer a reasonable approach without a full dataset. More to follow.

And fortunately, we do have some important data. Referencing GSA, we have a point scale to apply. We also have test scores, high school sports played and letters, Eagle Scout/Class president status, and whether the cadet played Corps Squad sports at West Point.

We also have math and a large set of data to review. For this exercise, we look at the classes of 2015-2019: a cohort relatively constrained, but large enough to show trends.

So let’s get to it.

Methodology

We use the GSA WCS calculator to compile a set of point values for different components of the WCS. We use GSA’s WCS scoring table to map them against:

West Point Whole Candidate Score Category Breakdown Graph
Available at: https://www.gainserviceacademyadmission.com/west-point-admissions-calculator/ accessed 12/3/21. We’re hopeful that suffices for the Honor reps out there.

For the test scores, we recorded the WCS point difference at every score level for both the SAT and ACT. We noted that many of the candidates in the dataset had scores below the minimums shown in the calculator, so we extended the scales assuming that points awarded scaled linearly as scores went down.
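As a sketch of that extrapolation, here is a minimal example. The (score, points) pairs are illustrative placeholders, not GSA’s published values:

```python
# Sketch of extending a score-to-points scale below the calculator's minimum.
# The (score, points) pairs are illustrative placeholders, not GSA's values.
published = [(500, 300), (550, 350), (600, 400)]  # lowest published SAT rows

# Points-per-score slope taken from the bottom of the published table
(score0, pts0), (score1, pts1) = published[0], published[1]
slope = (pts1 - pts0) / (score1 - score0)

def uwcs_points(score):
    """Return points, extrapolating linearly below the published minimum."""
    if score >= score0:
        # Use the highest published row at or below the score
        return max(p for s, p in published if s <= score)
    return pts0 - slope * (score0 - score)

print(uwcs_points(450))  # 50 below the minimum -> 300 - 1.0*50 = 250.0
```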

We assumed that a candidate with many letters, or with Corps Squad participation, earned the highest athletic tier. For the lower tiers, we assumed points correlated with the number of letters: many letters mapped to a higher point award, some letters to the mid tiers, and no letters to the bottom tier.
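A minimal sketch of that tier mapping; the point values and the letter thresholds for “many” and “some” are our own assumptions, not West Point’s algorithm:

```python
# Illustrative mapping from varsity letters / Corps Squad status to an
# athletic tier. Point values and thresholds are placeholder assumptions.
TIER_POINTS = {"top": 800, "mid": 600, "bottom": 400}  # hypothetical values

def athletic_tier(letters, corps_squad):
    """Assign a tier from letter count and Corps Squad participation."""
    if corps_squad or letters >= 3:   # "many" letters (threshold assumed)
        return "top"
    if letters >= 1:                  # "some" letters
        return "mid"
    return "bottom"

print(athletic_tier(2, False), TIER_POINTS[athletic_tier(2, False)])  # mid 600
```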

We ignored the CFA and Faculty Appraisal scores. Though they are valid point sources, rather than guess at them outright we assume they are more or less extensions of the HS Rank / Extracurriculars and Athletic Activities scores.

After calculating “usmadata WCS” (“uWCS”) points for test scores, HS rank, athletics, and extracurriculars, we summed them. We then rank-ordered the cadet records dataset by uWCS score and created two lists:

  1. The actual class that attended (but not necessarily graduated) West Point.
  2. A rank-ordered list of candidates for that class who were considered Qualified but did not necessarily attend. This is important because Qualified status means the candidate was eligible for admission. In other words, the Qualified candidates represent the opportunity West Point had to create a class.
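The two lists above can be sketched in pandas. The column names and toy records are our own illustration, not the dataset’s actual schema:

```python
import pandas as pd

# Toy records standing in for the cadet dataset; columns are our own naming.
df = pd.DataFrame({
    "cadet_id":  [1, 2, 3, 4, 5],
    "uwcs":      [2900, 2700, 2850, 2600, 2750],
    "qualified": [True, True, True, True, True],
    "attended":  [True, False, True, True, False],
})

class_size = 3

# List 1: the class that actually attended
actual = df[df["attended"]]

# List 2: qualified candidates, rank-ordered by uWCS, cut at class size
rank_ordered = (df[df["qualified"]]
                .sort_values("uwcs", ascending=False)
                .head(class_size))

print(sorted(actual["cadet_id"]))        # [1, 3, 4]
print(sorted(rank_ordered["cadet_id"]))  # [1, 3, 5]
```

Comparing the two membership lists is what surfaces who a merit-only class would have dropped and added.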

Results

We set our flags for qualified and actual applicants, then filtered and pivoted away. We filtered on race and test scores (SAT and ACT made comparable and scaled with the calculator). While we had hoped to identify the impact on Corps Squad athletics, we couldn’t, because the “qualified” candidate pool carries no indication of which candidates might become Corps Squad.
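A toy version of that pivot, with illustrative categories and counts rather than the real data:

```python
import pandas as pd

# Toy comparison of composition between the actual and rank-ordered pools.
# Categories and counts are illustrative only.
df = pd.DataFrame({
    "race": ["White", "White", "Black", "Hispanic", "Asian", "White"],
    "pool": ["actual", "rank", "actual", "actual", "rank", "rank"],
})

# Count each category per pool, then convert counts to per-pool percentages
table = (df.pivot_table(index="race", columns="pool",
                        aggfunc="size", fill_value=0)
           .pipe(lambda t: t / t.sum() * 100))
print(table.round(1))
```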

The resulting table is below.

Observations:

The rank-ordered class is consistently more White and less Black and Hispanic than the actual classes. The impact is 11-13% of a class, or around 200 cadets. Asian cadets would also see an increase of about 1-2% of a class, a 12-14% relative increase in representation.

Certainly both athletics and racial composition goals drive this difference. Though we can’t assign either of those motivations to a particular cadet ID case, the impact is clear.

Another impact is in test scores. The average “uWCS” point value for the actual classes is 2677, and the value for rank-ordered classes is 2814, a difference of 137 points. Using the GSA calculator, this is equivalent to 90 points on the SAT Verbal/English test or 70 points on the Math test (or some combination). The NCES tells us that a standard deviation on the SAT is 195 points. So the classes are giving up 0.3-0.5 SD on aptitude tests by not being formulated in a rank-ordered way.
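The effect-size arithmetic in that paragraph can be checked directly:

```python
# Checking the effect-size arithmetic on the uWCS test-score gap.
uwcs_gap = 2814 - 2677           # difference in mean uWCS points
sat_verbal_equiv = 90            # GSA-calculator equivalent, SAT Verbal points
sat_math_equiv = 70              # GSA-calculator equivalent, SAT Math points
sat_sd = 195                     # NCES standard deviation for the SAT

print(uwcs_gap)                            # 137
print(round(sat_math_equiv / sat_sd, 2))   # 0.36 SD
print(round(sat_verbal_equiv / sat_sd, 2)) # 0.46 SD
```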

And lower test scores aren’t free. From previous posts we know that lower scores mean more separations. So by deliberately forming non-rank-ordered classes, West Point is knowingly signing up to separate more of those cadets. This is unethical.

We note that, on average, more women would attend West Point if classes were admitted on merit. It seems that women rank lower in admissions priority than football (representative of all the recruited athletics, but certainly the most visible and resource-intensive) and ethnic representation goals.

We did not review athletic status: the qualified pool already met the PAE standards, the rank order included weighting for athletics, and we have no way of knowing how many candidates would become Corps Squad. We do not consider it important how many football players a class graduates, only how many of the most capable officers. But it is worth noting the 200-300 Corps Squad athletes who drop off the rank-ordered class each year, indicating that standards are substantially compromised by avoiding merit-based admissions.

Conclusions

This is more empirical validation of the unethical, self-serving, and political nature of Academy admissions. By avoiding merit-based admissions, West Point is discriminating for favored groups, causing higher attrition rates (unfair to both the separated cadets and the candidates who were passed over), and promoting a culture where careerism is favored over military effectiveness.

Further, it is lying to the public about its admissions practices. West Point implies that merit is *the* determining factor in admissions. In fact, it is not, for about 15-20% of a class. The admissions site should list two more ways to “get to the top of the pile”: Be a recruited athlete, or be “diverse.”

As always factual corrections and thoughtful criticism are welcome.

3 thoughts on “A Class Admitted on Merit”

  1. I believe that “merit” is a relative thing in the admissions process. And I do not see in the above commentary:
    1. How many applied for admission in each year, and in what categories
    2. How many were accepted each year
    3. How many actually enrolled
    4. How did those who enrolled fit among those “classes” as you cite
    5. How did attrition work out
    6. Who is actually making these admission decisions and how are those decisions working out vis-a-vis performance and filling needs in the military, i.e. is there accountability? Is there enough “guts” among those who make the decisions to do th

    • Thanks for the comment. Our definition of merit here is as close as we can get it to the Academy’s: Whole Candidate Score. And we think this shows they choose to disregard it when convenient for other, not-related-to-winning-wars reasons. Our method of using qualified candidates also creates an apples-to-apples comparison of what classes the Academy could have created vs what they did create. Annual variation in yield should not affect the results we’re seeing year after year after year.

  2. Hard to follow but no secret. Maybe the most surprising finding was the stat on how this impacts women. Truly an unintended consequence.

