Today I finished putting together the first draft of the 2015 Pirates top 50 prospect list. The top 50 list is exclusive to the 2015 Prospect Guide, which you can currently pre-order on the products page of the site. Between that and finishing the formatting of the book, I don't really have an article topic tonight. So I thought I'd go into detail on the process of ranking prospects for the books, where we are using our ratings system for the second year in a row.
It starts with me putting together a list of every prospect-eligible player in the system (fewer than 130 at-bats, 50 innings pitched, or 30 appearances for relievers). From there, I give every prospect a 2-8 rating in three categories:
1. Likely Upside
2. Floor
3. Ceiling
The likely upside is exactly what it says, what the player will most likely become. The floor is where I see the player ending up if he doesn’t reach his upside, and the ceiling is the highest level I see the player reaching. Pretty basic stuff there. Here are the classifications for each number.
| Rating | Classification |
| --- | --- |
| 6 | Above-Average Starter / Strong #3 Innings Eater / Impact Closer |
| 5 | Average Major Leaguer / #3-5 SP / Closer Candidate |
| 4 | Impactful Bench Player / Spot Starter / Strong Middle Reliever |
| 3 | Up & Down Player |
Most players will have a floor of 2, and we rarely give a ceiling of 8. We don't publish the floor and ceiling ratings for each player. Last year there were only two players who had a ceiling of 8: Gregory Polanco and Jameson Taillon. We've yet to have a player with a likely upside of 8, since that would be a once-in-a-generation prospect. Polanco and Taillon were both rated 7s last year.
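The eligibility cutoffs described at the top of the post can be sketched as a simple filter. This is a hypothetical illustration only; the function name and the `player` dictionary keys are my assumptions, not anything the site actually uses:

```python
def is_prospect_eligible(player):
    """Return True if the player keeps prospect eligibility under the
    cutoffs described above: fewer than 130 at-bats for hitters,
    50 innings pitched for starters, or 30 appearances for relievers.

    `player` is a dict with assumed keys: "role" ('hitter', 'starter',
    or 'reliever') plus the relevant major league career total.
    """
    role = player["role"]
    if role == "hitter":
        return player["at_bats"] < 130
    if role == "reliever":
        return player["appearances"] < 30
    return player["innings_pitched"] < 50
```

So a hitter with 129 career at-bats would still make the list, while one with 130 would not.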
After I'm done with my rankings, I send a list out for other people to complete the same process. I finish mine first, because I want to avoid anyone else influencing my rankings, and I've found from past experience that when someone sends me their list, I want to check it out right away, even if my own isn't done. John Dreker and Wilbur Miller both spend a lot of time during the season following the entire minor league system, so they both submit rankings on every player in the system. Then we get rankings from local writers on their specific teams. For example, Ryan Palencer covers the Indianapolis Indians for us, so he only ranks prospects he saw with Indianapolis. I usually lean more on the local writers when there is a wide range of opinions on a specific player.
Every writer on the site, whether it's John, Wilbur, myself, or the local guys, has been in contact with scouts throughout the year, and we incorporate that into the rankings with a section for notes. The notes might not seem necessary, since we publish so much content on the site. However, there is a lot of information we have on players that never reaches the site. Some of that is saved specifically for the book, and some of it is just content that didn't make an article. Either way, it ends up in the book.
Once I have all of the rankings together, I average out each player's upside, floor, and ceiling. The upside is the number we use in the book, although we don't take the average as gospel; we'll adjust each player individually. The floor and ceiling, along with the level the player was at in 2014, all make up the risk factor. A guy with a floor of 2 in A-ball is going to have "Extreme" risk most of the time. A guy with a floor of 5 in Triple-A is going to have "Low" risk.
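As a rough illustration, the averaging-and-risk step might look something like this. This is a hypothetical Python sketch; the function names, the ballot format, and the risk cutoffs are my assumptions, not the site's actual formula:

```python
from statistics import mean

def average_ratings(ballots):
    """Average each rater's upside/floor/ceiling (2-8 scale) for one player."""
    return {key: round(mean(b[key] for b in ballots), 1)
            for key in ("upside", "floor", "ceiling")}

def risk_factor(avg_floor, level_2014):
    """Rough risk label from the averaged floor and 2014 level.
    The exact cutoffs here are assumed for illustration."""
    if avg_floor < 3 and level_2014 in ("A", "Low-A", "High-A"):
        return "Extreme"
    if avg_floor >= 5 and level_2014 == "Triple-A":
        return "Low"
    return "Moderate"

# Three hypothetical ballots for one player:
ballots = [
    {"upside": 6, "floor": 2, "ceiling": 7},
    {"upside": 5, "floor": 3, "ceiling": 7},
    {"upside": 6, "floor": 2, "ceiling": 8},
]
avg = average_ratings(ballots)
print(avg, risk_factor(avg["floor"], "High-A"))
```

A low averaged floor in A-ball lands in the "Extreme" bucket, while a floor of 5 in Triple-A comes out "Low", matching the two examples above.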
After I get the averages together, I throw together a top 50 list, which is pretty simple with the upside and risk factors in hand. I then send that list out, along with the ratings, to John and Wilbur, and get their comments on who should move up, who should move down, and why. The "why" is important, as our rankings differ, and I want to know that there is a good reason to move a guy, rather than just a general disagreement. Most of the time they'll make good arguments, and the rankings will be adjusted. I'll also adjust the rankings during the write-up process, where I do more research on a player and might find that he should be rated higher or lower, or graded differently. In short, today was the first ranking, and there will be about 1,000 adjustments over the next month before the book is complete, with the final adjustment coming minutes before it goes to the publisher (based on experience).
Of course, those of you who have bought the book in the past know that I care much more about the tiered rankings than about a top 50 list, and that ranking system is much easier to put together. But a top 50 list sells.
As for the progress of the book, by the end of the week I will have everyone’s bio and contract info updated, along with the 2014 stats added to the book. This process is about as boring as it sounds, especially after the rankings, and will result in me watching a lot of Scandal and House of Cards (the two shows on Netflix that I’m planning to watch this off-season). After that, I’ll have about a month to finish the profiles before the book goes to publishing.
Maybe you found this interesting. Maybe you also took a night off from this article. I’ve had the book on my brain all day, and while I’ve got some article ideas, I couldn’t give them the proper attention tonight. I’ll be back to the regular schedule with one of those ideas tomorrow. Until then, if you haven’t pre-ordered your copy of the Prospect Guide, you should do so here.