
With Smash Summit 3 behind us and only a small handful of major events left in the calendar year (shoutouts to UGC Open, Eden, Dreamhack Winter, and Don’t Park On The Grass), it’s almost time for everyone’s favorite and most controversial topic of the smash year: the MIOM Top 100 selection. Since its first iteration at the end of 2013, the Top 100 list has generated heated controversy, wild speculation, copious amounts of salt, and significant prestige. It is the standard against which all players measure themselves. Even the players most vocally opposed to the ranking still care enough about it to hold serious opinions on how the players are laid out. Even though there are now multiple methods of player ranking, both objective and subjective, the MIOM list remains the standard-bearer for where our top players stack up.

So for today, I would like to take a deeper look into how the Top 100 is compiled and ranked, to point out some of the more concerning flaws in the system, and take a peek into why smashers care more about that ranking than any other.

After the success that Melee experienced in 2013, Melee It On Me contributor Daniel “Tafokints” Lee began work on the first North American ranking since the MLG days of the mid-2000s. It seemed a pertinent time to do it, as Melee had just experienced its largest growth in years and there were many names that needed recognition. The initial eligibility requirement was simultaneously lenient and restrictive: a player simply had to compete in a US-based national. This was lenient in the sense that anyone who attended Evo was eligible, including many FGC first-comers who may never have played the game competitively before that weekend. Yet it was also deeply flawed: European greats such as Amsah and Zgetto were ineligible, as were regional “hidden bosses” such as Wife and Vudujin. Because they were unable to attend a US national that year, they were excluded, even though those players could have blown 90% of the nominees out of the water. The result was the 2013 ranking, which was as groundbreaking as it was flawed.

2014’s ranking improved upon the previous year’s in a couple of notable ways, chief among them the inclusion of regional tournaments in consideration for a player’s nomination. This allowed for the inclusion of hidden bosses such as NorCal’s Lucien and New York’s Lord HDL (this is the Link/Marth player, not the flashy Captain Falcon main). However, a major flaw still remained in the decision to exclude all tournaments held outside the United States, and several notable players were left off of the nomination list by accident (most notably Professor Pro, whose 2014 track record included wins over 28th-ranked Kels and 65th-ranked Fuzzyness). Still, the Top 100 provided seeding for tournaments in the first half of 2015 and remained the best ranking system the scene had to offer.

By the conclusion of 2014’s ranking period, an issue had presented itself: the American west coast was wildly over-represented and the rest of the world felt very underrated. This was a problem on two fronts. First, since the Top 100 is compiled from the opinions of a panel of influential community voices, there was very little objectivity to the results. Second, the problem was compounded by the composition of the panel: while panelists were selected from all over the world, the region most reliable about returning its rankings to Tafokints was California. This meant that other regions’ voices were stifled and players who should have been ranked higher instead flew under the radar.

This element of the Top 100 ranking did not change going into 2015’s ranking period, although with the massive boom in major tournaments, the rules for nomination were changed to be more broadly representative. It was no longer tenable to include all participants at US national events, and even reducing the nominations to those who made bracket at those events ballooned the list to an unmanageable size. As a result, events were compared against each other and top players were given a chance to be recognized regardless of their region. Top 64 at the year’s supermajors, top 24 at Dreamhack Winter (2015’s most competitive European major), top 16 at any other major, and top 8 at any European major qualified a player for nomination. 2015 also introduced a separate list of hidden bosses, which removed further clutter from the voting process and ensured players without that kind of global access could still be recognized; this included Brazilian player Aisengobay and Japanese player Flash. As a result of these changes and an expansion of the panel beyond previous years’ numbers, the 2015 list remained the most accurate assessment of global player skill well into the first part of 2016.
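The 2015 nomination thresholds described above amount to a simple placing-cutoff rule per event tier. A minimal sketch of that rule follows; the tier labels are my own shorthand for illustration, not MIOM terminology.

```python
# Placing cutoffs for 2015 Top 100 nomination, per event tier.
# Tier names here are invented shorthand, not official MIOM labels.
PLACING_CUTOFFS = {
    "us_supermajor": 64,      # top 64 at the year's supermajors
    "dreamhack_winter": 24,   # top 24 at Dreamhack Winter 2015
    "major": 16,              # top 16 at any other major
    "euro_major": 8,          # top 8 at any European major
}

def nominated(event_tier: str, placing: int) -> bool:
    """True if a single placing meets the 2015 nomination bar."""
    cutoff = PLACING_CUTOFFS.get(event_tier)
    return cutoff is not None and placing <= cutoff
```

Under this rule, a 64th-place finish at a supermajor qualifies a player, while a 9th-place finish at a European major does not, matching the thresholds listed above.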

Preparations have already begun for 2016’s yearly ranking, and this year will be simultaneously easier and more difficult to assess than ever before. There will be over 25 major tournaments to consider from this year, including several invitationals. Players have more data points available than ever before…which is great for players who fall into a clear cut above the rest, but does little to ease judging the lower end of the Top 100. This is one of the major issues with the Top 100 system: it gets much, much harder to make an accurate assessment of player skill as you move further away from the illustrious number 1 position. This year, for example, the number 1 spot will almost certainly go to Armada, Hungrybox, or Mango, and at the conclusion of Don’t Park On The Grass (the last major of the year) that spot will have cemented itself. Positions 4 and 5 will go to some combination of Leffen and Mew2King, as they are the only other players to take sets from Armada this year (which should show just how big a deal it is to beat him). But as the ranking moves further down the list, more players are vying for fewer positions. Plup, SFAT, Westballz, and Axe will be fighting for positions 6 through 9, but positions 10 through 25 are a bloody melee of inconsistent placings, questionably seeded tournaments, and occasional breakout results. This is a field which includes the likes of Shroomed, Swedish Delight, Wizzrobe, Duck, Ice, Javi, Zhu, S2J, PewPewU, Lucky, The Moon, and a handful of other world-class players who are all striving to take that next big step. Further down still, the ranking becomes even more difficult; how much of an empirical case can really be made that a player deserves to be ranked 100th over another 60 players who could all defeat him?

The difficulty of accurately assessing our A-tier players has prompted an effort this year to produce truly objective rankings worldwide. Rating systems such as Elo, originally developed to rank chess players, have already been introduced in individual regions. New Jersey is probably the most notable example of an Elo ratings system, although similar structures exist in the greater New England area as well as the EU. While such a system does help produce a more accurate assessment of a player’s skill relative to those already on the list, these rankings do not account for outsiders or inactive players. Take, for example, the SSBM Glicko ratings system, which has been a year-long experiment in objective ranking. While some of its results make more sense than others (Swedish Delight placing 8th should raise an eyebrow or two), it also does not include notable inactives Hax and PPMD, both of whom are easy top 10 contenders when they’re in practice. If a large-scale tournament were to seed using only objective results, these players could wind up in a very disadvantageous position. Imagine a scenario where Hax has to play Armada in round 1, only to slaughter 8 different people on his way back to a top 8 finish. This would drag down the objective scores of all his victims, and also skew the losers’ bracket in a very unbalanced direction.
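For readers unfamiliar with how these objective systems work, the core of Elo is a single update rule: each player has a rating, the ratings imply a win probability, and each set result moves both ratings toward what actually happened. A minimal sketch follows; the K-factor of 32 is an assumed value, as real implementations tune it per player pool.

```python
# Minimal Elo rating update. K-factor of 32 is an assumption;
# production systems tune K per player pool and activity level.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_elo(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return the new (rating_a, rating_b) after a single set."""
    e_a = expected_score(rating_a, rating_b)
    s_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (s_a - e_a)
    new_b = rating_b + k * ((1.0 - s_a) - (1.0 - e_a))
    return new_a, new_b

# An upset (a 1400 beating an 1800) moves both ratings roughly ten times
# as far as the expected result, which is why one mis-seeded inactive
# player like Hax can distort the scores of everyone he beats.
upset = update_elo(1400, 1800, a_won=True)
expected = update_elo(1800, 1400, a_won=True)
```

The zero-sum property (whatever the winner gains, the loser loses) is exactly why a falsely low rating for an inactive player is contagious: every win he takes siphons points from correctly rated opponents.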

The problem of international representation is also present in 2015’s system and will likely not be resolved in 2016’s either. While European tournaments are becoming more prestigious, they are also undeniably less competitive than their American counterparts. No European tournament in history has featured all of the top 6 (or 5, RIP PPMD), and when tournaments like Genesis 3 can boast that 81 of the top 100 were in attendance, it becomes very difficult to take even the most competitive European tournaments as seriously as their American equivalents. That does not mean enormous strides haven’t been made this year, however. There have been a half-dozen European majors this year, all of them featuring at least one member of the American top 10, and with the wide proliferation of streaming, European players are getting more exposure than ever before. Players like Tekk, Trifasia, and Jeapie are well-known in the States, and as the European scene continues to expand, there will be more crossover than ever before.

To address the concerns of bias toward one region or another, the MIOM panel will likely be expanded again this year, which can only be a good thing. However, one major problem exists with panel rankings, as anyone who has ever participated in one for their local PR can attest: there is inevitably an issue of cognitive bias whenever things are left to human beings. Some have previously lobbed accusations at California players for being too west-coast centric, but this misses the wider issue: panel voting inevitably comes down to a battle of popularity in certain instances. This is, in large part, where accusations of ‘Falcon bias’ come from, the panelists’ supposed tendency to overrate Falcon players while underrating others (Sheik mains in particular). Players like The Moon, who became wildly popular with the global scene thanks to his appearance at Smash Summit 3, have the benefit of a positive image with those who know him. Whether consciously or not, a person’s individual perception of another can affect their ability to be objective about that person’s skill. The only way to truly counter this is to take a large sample size, throw out the outlying opinions, and average the results. To his credit, Tafokints has been extremely good about this, and with a larger panel in store for the coming year, it is likely that the Top 100 of 2016 will be more accurate than ever before.
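The aggregation step described above, throwing out outlying opinions and averaging the rest, is a trimmed mean. A minimal sketch, with invented ballot numbers purely for illustration, shows how it neutralizes a single biased panelist:

```python
# Outlier-resistant panel aggregation: drop the most extreme ballots
# for a player, then average the rest. Ballot values are invented.

def trimmed_mean(ballots, trim=1):
    """Average the ballots after dropping `trim` values from each extreme."""
    if len(ballots) <= 2 * trim:
        raise ValueError("not enough ballots to trim")
    ordered = sorted(ballots)
    kept = ordered[trim:len(ordered) - trim]
    return sum(kept) / len(kept)

# Five panelists rank a player around 12th-16th; one friendly outlier
# ranks him 3rd. The plain mean is dragged to 12.5; trimming the
# extremes yields 14.0, much closer to the panel consensus.
ballots = [12, 14, 15, 15, 16, 3]
```

A larger panel makes this more effective: with more ballots, a single over-enthusiastic (or dismissive) voice is both more likely to land in the trimmed tail and less able to move the average if it survives.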

There are clearly significant issues and tradeoffs with the MIOM Top 100 ranking...so why is it so important to us? Everyone in every scene, whether that’s the 805 in SoCal or the quiet mountain towns of Colorado, has an opinion on who should be on that list at the end of the year. For some reason, that ranking means something to us smashers. There are a few reasons for this. The first is that, because it incorporates the opinions of some of the most notable and reputable TOs and figureheads in the scene (including Nintendude among others), it is one of the few things all scenes can collectively come together on. The opinions of the most knowledgeable and dedicated players come together and get averaged out, which means the result is neither one person’s subjective opinion nor an undiscriminating community-wide poll. The Top 100 is a collection of what our best minds have to say about our best players, pure and simple. Secondly, and this is something of a self-fulfilling prophecy, the Top 100 ranking determines seeding for the first half of the year’s major events. At Genesis 3, the top 64 seeds were floated out of pools, and this was directly determined by their placing on 2015’s Top 100 list. While it feeds back into itself, making the Top 100 important by virtue of its being used, that importance should not be underestimated: the Top 100 remains the steadfast seeding tool for TOs throughout most of the year.

However, the strong possibility remains that the Top 100 list is exceptional because it gives players a goal to strive for. While an objective system like the Glicko ratings gives us a good indicator of player skill and what tier they belong in, the Top 100 also carries with it the idea of being a respected player. The input of top players means that if a player is able to make it onto that list, then they have earned the respect of their peers. That’s not to say that they are being respected purely on skill; some players may be over-ranked because they are driving a new approach to the game that has never been seen before or implemented at such a high level. Others may simply be friends with the right people on the panel. Yet even the friendliest person in the scene cannot make the list on the strength of their people skills alone. The Top 100 is an odd mix of a player’s results, their notable upsets and notable losses, and their perception by players equal or superior in skill. Players across the country and the world hold “I want to make the Top 100” as a personal goal, and more than anything else this indicates that the ranking, as subjective and flawed as it might be, is the most prestigious and important ranking our scene has.

It’s a little bit like the Smash community’s version of the Academy Awards: everyone knows it’s a flawed system, and when Shakespeare in Love wins more votes than Saving Private Ryan, thousands of people are confused and even upset. Sometimes our players are recognized not for their results and how they actually stand in competition, but rather for what they represent and how likeable they are. Think of the posthumous award to Heath Ledger; perhaps he did deserve the nod, but anyone paying attention knew he would get the award the moment he passed away. Smash has its own Academy, its own panel of respected voices and opinions, and even though it is a flawed system that makes us shake our heads in bewilderment every so often, the stark reality is that most players still dream of the day they get to read the blurb detailing how great a player the panel thinks them to be. No system of ranking can be perfect, and perhaps the reason the Top 100 is so important is precisely because of how subjective it is. It is not a cold system of bars and numbers; it is truly a panel of recognition. If you make it onto that list, it means that somewhere in the wide world of Melee, people are paying attention.

About The Author

Josh Kassel
Smash Contributor
