@Zippo Why is it important to base the testing on the CA rating of the top 100 players for each position? Don't we know that the top 100 highest-CA players are not the top 100 performing players in the game?
@Orion
Also, can you explain this from the opening post
- MLR is Winger
- AMLR is Fast Striker
Are you saying if i want to select the best AMLR I should look at the Fast ST column on GS?
@Middleweight165, you misunderstood what I was talking about.
I was talking about creating a testing environment to test AGAINST that accurately represents the real game environment; otherwise there's no point in testing, because your findings won't work in the real game environment.
The game has an algorithm that distributes attributes for every position, and it applies to both types of players: existing players and generated players.
To understand how the attribute distribution algorithm works, you can take a look at the attributes of the top 100 highest-CA players for each position.
For example, if we take the top 100 highest-CA Strikers in the game, then we'll see that the average value for the "Tackling" attribute is about "7" and the average value for the "Finishing" attribute is about "15".
But if we take the top 100 highest-CA Central Defenders in the game, then we'll see that the average value for the "Tackling" attribute is about "15" and the average value for the "Finishing" attribute is about "7".
What does the above tell us?
It tells us that if, in our testing league, we set the "Tackling" attribute for the Strikers much higher than "7", or the "Finishing" attribute for the Central Defenders much higher than "7", then we are moving away from the real environment and creating an "unrealistic" environment to test AGAINST. There's no point in testing AGAINST such an environment, because it's far from the real game environment: it's obvious that in the real game you won't encounter Central Defenders with "14" for the "Finishing" attribute or Strikers with "14" for the "Tackling" attribute.
Once more, I was talking about creating a test environment to test AGAINST. I didn't say that the top 100 highest-CA players for every position are the best performers.
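The top-100 averaging described here can be sketched in Python, assuming the attribute data has been exported as a list of records. The field names and values below are illustrative only, not real FM data:

```python
# Hypothetical sketch: average each attribute over the top-N highest-CA
# players for a position. Real data would come from an FM export.

def average_attributes(players, top_n=100):
    """Average every attribute over the top_n highest-CA players."""
    top = sorted(players, key=lambda p: p["CA"], reverse=True)[:top_n]
    attrs = [k for k in top[0] if k != "CA"]
    return {a: sum(p[a] for p in top) / len(top) for a in attrs}

# Tiny illustrative sample (a real run would use 100+ strikers)
strikers = [
    {"CA": 180, "Tackling": 6, "Finishing": 16},
    {"CA": 175, "Tackling": 8, "Finishing": 15},
    {"CA": 120, "Tackling": 2, "Finishing": 9},  # low CA, falls outside the top 2
]

print(average_attributes(strikers, top_n=2))
# {'Tackling': 7.0, 'Finishing': 15.5}
```

Per Zippo's description, these per-position averages are what the test league's players are built from.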
And I brought it up to correct what @Orion said in his post below, because things aren't done the way he described.
Orion said: I think he meant that FM Arena is a great attribute test, but it's kind of an unrealistic scenario. You have a whole league with literally the same teams, and one has all the players with 1 attribute changed. That's far from a 'real scenario' where, for the sake of argument, you usually have a big CB and a small striker. In FM Arena's case you have, for example, a team whose players all have 10s in every attribute no matter the position, and an opposing team whose players all have 10s except one testing attribute.
Here you have data based on players from the 'regular' game, so they are far more diverse.
You could say that FM Arena is more like a 'lab test', a strict scenario, while this experiment is more like checking on a real-life population.
I hope that clears things up.
Cheers.
Zippo said: @Middleweight165, you misunderstood what I was talking about.
I was talking about creating a testing environment to test AGAINST that accurately represents the real game environment; otherwise there's no point in testing, because your findings won't work in the real game environment.
The game has an algorithm that distributes attributes for every position, and it applies to both types of players: existing players and generated players.
To understand how the attribute distribution algorithm works, you can take a look at the attributes of the top 100 highest-CA players for each position.
For example, if we take the top 100 highest-CA Strikers in the game, then we'll see that the average value for the "Tackling" attribute is about "7" and the average value for the "Finishing" attribute is about "15".
But if we take the top 100 highest-CA Central Defenders in the game, then we'll see that the average value for the "Tackling" attribute is about "15" and the average value for the "Finishing" attribute is about "7".
What does the above tell us?
It tells us that if, in our testing league, we set the "Tackling" attribute for the Strikers much higher than "7", or the "Finishing" attribute for the Central Defenders much higher than "7", then we are moving away from the real environment and creating a testing environment to test AGAINST that is far from the real game environment. There's no point in testing AGAINST such an environment, because it's obvious that in the real game you won't encounter Central Defenders with "14" for the "Finishing" attribute or Strikers with "14" for the "Tackling" attribute.
Once more, I was talking about creating a test environment to test AGAINST. I didn't say that the top 100 highest-CA players for every position are the best performers.
I hope that clears things up.
Cheers.
Hey @Zippo, I replied to Orion, who I believe did say that he uses the top 100 players at each position to then determine the most important attributes for each position, if I understand it correctly.
Sorry @Zippo, I was actually replying to you but thought that was Orion's post. Your answer cleared it up for me though, because I had misunderstood. Are the main differences between the two tests that yours covers important attributes across all positions, whereas Orion has identified position-specific attributes? I'm sure there are also differences in the methodology, but are these the main outcome differences?
Edit - This comment by Orion also gives me more info
"In terms of rating - the FM Arena test is based on team Points, harvestgreen22's test is using goal difference, and my model is using player rating as the target variable. Hence my model will look for attributes that increase rating, but not team performance per se. We know that in general average rating will be somehow connected with team results. And I know about a flaw of this model: since it looks for players with high rating, it will usually prioritise offensive players, but that's due to the FM rating system."
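The modelling approach quoted here - attributes as predictors and player rating as the target variable - can be illustrated with a minimal one-attribute least-squares fit. The data is made up purely for the sketch; Orion's actual model is multivariate:

```python
def ols(xs, ys):
    """Ordinary least squares for y = a + b*x (one predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

# Made-up sample: Finishing attribute vs average match rating for strikers
finishing = [10, 12, 14, 16, 18]
rating = [6.8, 6.9, 7.0, 7.1, 7.2]

a, b = ols(finishing, rating)
print(round(b, 2))  # the "coefficient": rating gained per point of Finishing
```

A negative coefficient would simply mean the attribute is associated with lower ratings, all else being equal.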
Zippo might have a point, but I had a save where I experimented with player ratings using cheating scouting programs, and I've noticed much more often that Orion's ratings are closer to how players actually perform on that engine. Even the slightest difference in positional rating between players was very noticeable on the field. However, I would recommend using an Excel document rather than the Genie Scout file, because I think that its original rating formula is much more accurate.
Zippo said: Hi there,
Just for the record, the fm-arena test isn't a 'lab test' or anything like that.
In our test we try to recreate the real game environment as much as possible; otherwise there would be no point in doing the test at all, because the findings wouldn't work in the real game environment.
I want to stress that the attributes of the players in our test league weren't pulled out of thin air.
For example, the attributes of the Wingers in our testing league are based on the average attributes of the 100 highest CA wingers in the game.
The attributes of the Full Backs in our testing league are based on the average attributes of the 100 highest CA Full Backs in the game.
The attributes of the Central Defenders in our testing league are based on the average attributes of the 100 highest CA Defenders in the game.
And so on...
In other words, if we take the 100 highest-CA wingers in the game and average their attributes, then we'll get the attributes of the wingers in our testing league.
As you can see, the attributes of the players in our test league represent "perfectly" what we have at the top level in the real game.
You might ask why we only focus on the top level, and the answer is that we don't bother with testing a "lower league" environment because the difficulty level in such an environment is extremely low; any low league can be dominated with a very average tactic and very basic managerial skill.
FM only gets challenging at the top level. I'm speaking about high-reputation leagues like the English Premier League, Spanish La Liga, Italian Serie A, the Bundesliga, the Champions League and so on, so we focus our testing on that kind of environment, where people get challenged the most in FM.
I hope this helps to clear things up.
Cheers.
Thanks for clarifying. My sincere apologies then. I thought that your test was somehow similar to what Zealand did. I thought your test league was all the same teams.
I'm sorry if what I said is not true. This comes from the fact that I never saw what's behind your test, so I had to make assumptions. It's also not clear - at least in the Attribute Test main post - how the testing league is set up. We only see the results - and in some previous editions I think there were screens of the league results.
It's also an interesting point that you average the attributes of the 100 best players in a certain position. It would be interesting to see if those attributes at the start of the default database are vastly different from those in a save, let's say, 20 years into the future, where all of the real players are replaced with Newgens. Not only to see how Newgens are built differently compared to real-life players (because we know, for example, that WBs/FBs lack high crossing attribute values), but also because going into long-term saves possibly changes these average attribute values.
Zippo said: What did we just learn here?
We learned that ratings don't purely depend on attributes; they also greatly depend on other factors, such as the quality of the team and the tactic being used.
So if one striker gets better ratings than another striker, it doesn't necessarily mean that he has better attributes; it could just be due to him being in a better team or playing with a tactical approach that generates better ratings for the striker position.
I realise that. That's why I take my coefficients with a huge grain of salt. I understand that issue - players being played in a suboptimal tactic or even a suboptimal position for them. My assumption is that the sheer amount of data I had overcomes those outliers, but frankly you can never be 100% sure.
My wished-for feature to overcome that obstacle or uncertainty would be the possibility to filter players not by their 'natural' position but by their 'most played' position. This way we could get rid of the scenario where, let's say for the sake of argument, we have a player that is natural as both CB and STC. He plays as STC and scores a lot of goals, so he gets a high average rating. But in my model he also shows up in the CB lists, because I filter based on players' natural positions - sadly I have no other possibility - and thus he influences the coefficients for attributes required for CBs, which we know is not right, since he doesn't actually play that position.
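The wished-for filter could be sketched like this; the `apps` (matches per position) field is invented, since the point above is precisely that the real export doesn't offer it:

```python
# Hypothetical sketch: filter players by most-played position rather than
# by every position they are rated natural in. The "apps" field is invented.

def most_played_position(apps):
    """apps: dict mapping position -> matches played there."""
    return max(apps, key=apps.get)

players = [
    {"name": "A", "natural": ["CB", "STC"], "apps": {"STC": 30, "CB": 2}},
    {"name": "B", "natural": ["CB"], "apps": {"CB": 28}},
]

# The natural-position filter wrongly pulls player A into the CB pool...
cb_by_natural = [p["name"] for p in players if "CB" in p["natural"]]
# ...while the most-played filter keeps him out of it.
cb_by_played = [p["name"] for p in players if most_played_position(p["apps"]) == "CB"]

print(cb_by_natural, cb_by_played)  # ['A', 'B'] ['B']
```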
Middleweight165 said: @Zippo Why is it important to base the testing on the CA rating of the top 100 players for each position? Don't we know that the top 100 highest-CA players are not the top 100 performing players in the game?
@Orion
Also, can you explain this from the opening post
- MLR is Winger
- AMLR is Fast Striker
Are you saying if i want to select the best AMLR I should look at the Fast ST column on GS?
Yes, exactly. Because in GS you have only 1 Winger tab, and as we could see in the test, ML/R and AML/R use slightly different attributes, so I wanted to somehow fit this into the GS ratings file.
I made a modified spreadsheet, color-coded the coefficients, and added percentages to see the differences between every attribute. The FMTweak section also has a column to compare the coefficient vs the CA weighting.
Link
No Weight means that you can set it to 20 and it'll still have no impact on CA, making the stats super OP to buff with IGE.
There is something that I don't understand, so maybe you could clear this up. I am looking at your Classic ME numbers, and there you have the number -0,00331 on Throwing for the GK position. However, when I check the Genie Scout file for this, it shows Throwing as 51, which is higher than Decisions, which is on 35.
How is Throwing on 51 when the initial number is negative? Is this just a mistake, or am I missing something here?
flob said: There is something that I don't understand, so maybe you could clear this up. I am looking at your Classic ME numbers, and there you have the number -0,00331 on Throwing for the GK position. However, when I check the Genie Scout file for this, it shows Throwing as 51, which is higher than Decisions, which is on 35.
How is Throwing on 51 when the initial number is negative? Is this just a mistake, or am I missing something here?
-0,00331 is the coefficient for TRO, and TRO is not Throwing. It is explained later in the opening post, in the Disclaimer section:
Orion said: TRO is 'Rushing Out (Tendency)' - it's just how FM names that attribute when exporting.
FREVKY said: That's what I thought, so I created a pretty simple Excel spreadsheet that makes mass player comparison possible and easy.
Here's how it works:
First, you need to import the specified views. I created two sets of these: one for your team (for the squad view) and one for the scouting tab - both with CA & PA hidden - and another set with CA & PA visible for those who like to spoil the fun a little bit. Download them and paste them into the "views" folder in your Documents (C:\Users\your_name\Documents\Sports Interactive\Football Manager 2024\views is the default path).
When you load the view, you need to select every player, so click on one player and then ctrl+a to select everyone in the team or in the scouting range. Just bear in mind that the more players you select, the more time it takes, so if you're about to select over 1000 players, give it a few seconds to work.
Then, press ctrl+p to "print" the selection into an HTML file. Save it wherever you want, name it whatever you want.
Then you need the spreadsheet (an MS Excel file). Open it, then in Excel go to File->Open and select the HTML file with your set of players. Copy its whole contents (ctrl+a, then ctrl+c) and paste them into my spreadsheet, in the blank sheet called "IMPORT". Then switch to the sheet called "CALCULATION" and it should automatically calculate values for every player for each position, using the coefficients from this thread. Additionally, I added sections with CA, PA and the difference between them (these will only work when you used the views with CA and PA, obviously).
Of course, you can use whatever filters you want in the scouting section to narrow down the number of players to whatever you really need.
At first glance it may sound a bit complicated, but it's pretty easy to use. If you find any trouble using it, I'll try to help.
Spreadsheet link: https://www.mediafire.com/file/huj2qrmavoqnd6x/meta.xlsx/file
Updated spreadsheet including square root formula for M LR and ST: https://www.mediafire.com/file/j2rh3e6vk2hjw7i/meta.xlsx/file
The spreadsheet is editable so do whatever you want with it, if you find any room for improvements, go for it.
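What the CALCULATION sheet effectively computes - a weighted sum of each player's attributes using per-position coefficients - can be sketched in Python. The coefficient values below are placeholders, not the real ones from this thread:

```python
# Placeholder coefficients for a hypothetical Striker column - NOT the
# thread's real regression values.
COEFFS_ST = {"Finishing": 0.030, "Pace": 0.025, "Tackling": 0.001}

def position_score(player, coeffs):
    """Weighted sum of attributes; attributes missing from the export count as 0."""
    return sum(w * player.get(attr, 0) for attr, w in coeffs.items())

player = {"Finishing": 16, "Pace": 15, "Tackling": 5}
print(round(position_score(player, COEFFS_ST), 3))  # 0.86
```

Sorting a squad by this score, position by position, is then what the spreadsheet lets you do.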
Interesting spreadsheet. I gave it a try and funny enough, when I put my own squad in and sort from Z to A on strikers (ST), the top 3 are all defenders lol
flob said: Interesting spreadsheet. I gave it a try and funny enough, when I put my own squad in and sort from Z to A on strikers (ST), the top 3 are all defenders lol
I guess that happens?
Yeah, but that's based purely on in-game attributes. The game engine kind of penalizes you for selecting players out of their natural/trained positions, and I didn't implement that in my spreadsheet, obviously, as it's not confirmed how big the out-of-position penalty is.
FREVKY said: Yeah, but that's based purely on in-game attributes. The game engine kind of penalizes you for selecting players out of their natural/trained positions, and I didn't implement that in my spreadsheet, obviously, as it's not confirmed how big the out-of-position penalty is.

Yeah, I understand. It's good to still have to use your brain a little bit. Definitely gonna try it out, especially now that Genie Scout isn't working after the update.
May I ask how you deal with potential future wonderkids? As in, Genie Scout, for example, has columns for their potential score, so I have an idea of how good someone can be in the future before I bench/sell the wrong one.
FREVKY said: Yeah, but that's based purely on in-game attributes. The game engine kind of penalizes you for selecting players out of their natural/trained positions, and I didn't implement that in my spreadsheet, obviously, as it's not confirmed how big the out-of-position penalty is.
That's confirmed already. Zippo already made that test and, naturally, the natural position is the best one; I remember there was a gap to the other position conditions:
https://fm-arena.com/table/19-fm23-playing-position-ratings/
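For illustration only, such a penalty could be modelled as a familiarity multiplier on the positional score. The factor values below are invented; the linked test confirms a gap exists, but its exact size isn't stated in this thread:

```python
# Invented familiarity multipliers - NOT values from Zippo's test.
FAMILIARITY = {"Natural": 1.00, "Accomplished": 0.95, "Competent": 0.90}

def effective_score(raw_score, familiarity):
    """Scale a positional score down when the player is less familiar there."""
    return raw_score * FAMILIARITY[familiarity]

print(effective_score(100.0, "Natural"), effective_score(100.0, "Competent"))
```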