@Zippo Why is it important to base the testing on the CA rating of the top 100 players for each position? Don't we know that the top 100 players by CA rating are not the top 100 performing players in the game?
@Orion Also, can you explain this from the opening post:
- MLR is Winger
- AMLR is Fast Striker
Are you saying that if I want to select the best AMLR I should look at the Fast ST column in GS?
@Middleweight165, you misunderstood what I was talking about.
I was talking about creating a testing environment to test AGAINST that accurately represents the real game environment; otherwise there's no point in testing, because your findings won't work in the real game environment.
The game has an algorithm that distributes attributes for every position, and it applies to both existing players and generated players.
To understand how the attribute distribution algorithm works, you can look at the attributes of the 100 highest-CA players for each position.
For example, if we take the 100 highest-CA Strikers in the game, we'll see that the average value of the Tackling attribute is about 7 and the average value of the Finishing attribute is about 15.
But if we take the 100 highest-CA Central Defenders in the game, the average Tackling is about 15 and the average Finishing is about 7.
What does the above tell us?
It tells us that if, in our testing league, we set the Strikers' Tackling much higher than 7, or the Central Defenders' Finishing much higher than 7, then we are moving away from the real environment and creating an unrealistic environment to test AGAINST. There's no point in testing against such an environment, because it's obvious that in the real game you won't encounter Central Defenders with 14 Finishing or Strikers with 14 Tackling.
Once more, I was talking about creating a test environment to test AGAINST. I didn't say that the 100 highest-CA players in every position are the best performers.
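To make the averaging concrete, here's a minimal sketch of that kind of per-position calculation, assuming a hypothetical CSV export of the player database with column names like Position, CA, Tackling and Finishing (the file and column names are illustrative, not anyone's actual tooling):

```python
import pandas as pd

# Hypothetical export of the player database: one row per player,
# with Position, CA and individual attribute columns.
players = pd.read_csv("players.csv")

ATTRIBUTES = ["Tackling", "Finishing"]  # extend with any attributes of interest

def top100_averages(df: pd.DataFrame, position: str) -> pd.Series:
    """Average the attributes of the 100 highest-CA players in one position."""
    top100 = df[df["Position"] == position].nlargest(100, "CA")
    return top100[ATTRIBUTES].mean().round(1)

for pos in ["Striker", "Central Defender"]:
    print(pos, top100_averages(players, pos).to_dict())
```

In the scenario described above, this would print roughly Tackling 7 / Finishing 15 for Strikers and the reverse for Central Defenders.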
And I brought this up to correct what @Orion said in his post below, because things aren't done the way he described.
Orion said: I think he meant that FM Arena is a great attribute test, but it's kind of an unrealistic scenario. You have a whole league with literally the same teams, and one of them has all its players with one attribute changed. That's far from a 'real scenario' where, for the sake of argument, you usually have a big CB and a small striker. In the FM Arena case you have, for example, a team whose players all have 10s in every attribute regardless of position, and an opposing team whose players all have 10s except the one attribute being tested. Here you have data based on players from the 'regular' game, so they are far more diverse. You could say that FM Arena is more like a 'lab test', a strict scenario, while this experiment is more like checking a real-life population.
I hope this clears things up.
Cheers.
Hey @Zippo, I replied to Orion, who I believe did say that he uses the top 100 players in each position to determine the most important attributes for each position, if I understand it correctly.
Sorry @Zippo, I actually was replying to you but thought that was Orion's post. Your answer cleared it up for me, though, because I had misunderstood. Is the main difference between the two tests that yours identifies important attributes across all positions, whereas Orion has identified position-specific attributes? I'm sure there are also differences in methodology, but are those the main differences in outcome?
Edit - This comment by Orion also gives me more info:
"In terms of rating: the FM Arena test is based on team points, harvestgreen22's test uses goal difference, and my model uses player rating as the target variable. Hence my model will look for attributes that increase rating, but not team performance per se. We know that, in general, average rating will be somehow connected with team results. And I know a flaw of this model is that, since it looks for players with high ratings, it will usually prioritise offensive players, but that's due to the FM rating system."
Zippo might have a point, but I had a save where I experimented with player ratings using cheating scouting programs, and I noticed much more often that Orion's ratings are closer to how players actually perform on that engine. Even the slightest difference in positional rating between players was very noticeable on the pitch. However, I would recommend using an Excel document rather than a Genie Scout file, because I think its original rating formula is much more accurate.
Hi there,
Just for the record, the fm-arena test isn't a 'lab test' or anything like that.
In our test we try to recreate the real game environment as much as possible; otherwise there would be no point in doing the test at all, because the findings wouldn't work in the real game environment.
I want to stress that the attributes of the players in our test league weren't pulled out of thin air.
For example, the attributes of the Wingers in our testing league are based on the average attributes of the 100 highest-CA Wingers in the game.
The attributes of the Full Backs in our testing league are based on the average attributes of the 100 highest-CA Full Backs in the game.
The attributes of the Central Defenders in our testing league are based on the average attributes of the 100 highest-CA Central Defenders in the game.
And so on...
In other words, if we take the 100 highest-CA Wingers in the game and average their attributes, we get the attributes of the Wingers in our testing league.
As you can see, the attributes of the players in our test league represent "perfectly" what we have at the top level of the real game.
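As a rough illustration of how a league could be assembled from those averages (a sketch under the same assumed CSV export as before, not FM Arena's actual tooling), every test player of a given position gets stamped out of the averaged profile:

```python
import pandas as pd

players = pd.read_csv("players.csv")  # hypothetical export, as in the earlier sketch

def position_template(df: pd.DataFrame, position: str) -> dict:
    """Averaged attribute profile of the 100 highest-CA players in a position."""
    top100 = df[df["Position"] == position].nlargest(100, "CA")
    attrs = top100.drop(columns=["Name", "Position", "CA"])  # assumed non-attribute columns
    return attrs.mean().round().astype(int).to_dict()

# Every test-league player of a given position gets the same averaged profile.
squad = [{"Position": pos, **position_template(players, pos)}
         for pos in ["Central Defender", "Full Back", "Winger", "Striker"]]
```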
You might ask why we only focus on the top level. We don't bother testing a "lower league" environment because the difficulty level there is extremely low; any low league can be dominated with a very average tactic and very basic managerial skill.
FM only gets challenging at the top level, meaning high-reputation competitions like the English Premier League, the Spanish La Liga, the Italian Serie A, the Bundesliga, the Champions League and so on, so we focus our testing on that kind of environment, where people are challenged the most in FM.
I hope this helps to clear things up.
Cheers.
Thanks for clarifying. My sincere apologies, then. I thought your test was somehow similar to what Zealand did; I thought your test league was all the same teams. I'm sorry if what I said is not true. This comes from the fact that I have never seen what's behind your test, so I had to make assumptions. It's also not clear, at least in the Attribute Test main post, how the testing league is set up. We only see the results, and in some previous editions I think there were screenshots of the league results.
It's also an interesting point that you average the attributes of the 100 best players in each position. It would be interesting to see whether those attributes at the start of the default database are vastly different from the ones in a save, say, 20 years into the future, where all the real players have been replaced by Newgens. Not only to see how Newgens are built differently compared to real-life players (we know, for example, that WBs/FBs lack high Crossing values), but also because a long-term save could shift those average attribute values.
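Checking that would be a small extension of the earlier averaging sketch: export the same player table once from the default database and once from a save 20 years in, then diff the per-position averages. The file and column names here are assumptions:

```python
import pandas as pd

def per_position_averages(path: str) -> pd.DataFrame:
    """Average attributes of the 100 highest-CA players in each position."""
    df = pd.read_csv(path)
    top = df.sort_values("CA", ascending=False).groupby("Position").head(100)
    return top.groupby("Position").mean(numeric_only=True)

default_db = per_position_averages("players_default_db.csv")
newgens = per_position_averages("players_year_20.csv")

# Positive values: Newgens average higher than the default database.
print((newgens - default_db).round(1))
```

If the WB/FB Crossing gap mentioned above is real, it would show up here as a clearly negative entry in that row.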
Zippo said: What did we just learn here? We learned that ratings don't depend purely on attributes; they also depend greatly on other factors, such as the quality of the team and the tactic being used.
So if one striker gets better ratings than another striker, it doesn't necessarily mean he has better attributes; it could just be down to him being in a better team, or playing in a tactical approach that generates better ratings for the striker position.
I realise that. That's why I take my coefficients with a huge grain of salt. I understand that issue: players being played in a suboptimal tactic, or even in a suboptimal position for them. My assumption is that the sheer amount of data I had overcomes those outliers, but frankly you can never be 100% sure.
My wished-for feature to overcome that obstacle, or uncertainty, would be the ability to filter players not by their 'natural' position but by their 'most played' position. That way we could get rid of the scenario where, for the sake of argument, we have a player who is natural as both CB and STC. He plays as STC and scores a lot of goals, so he gets a high average rating. But in my model he also shows up in the CB lists, because I filter based on players' positions (sadly, I have no other option), and thus he influences the coefficients for the attributes required of CBs, which we know are not true since he doesn't actually play that position.
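If per-position appearance data were available in the export (it isn't, which is exactly the limitation being raised), the wished-for filter could look something like this hypothetical sketch, where the AppsByPosition column is entirely assumed:

```python
import pandas as pd

# Hypothetical data: each player's natural positions plus appearances per position.
df = pd.DataFrame([
    {"Name": "Player A", "Natural": "CB, STC",
     "AppsByPosition": {"STC": 34, "CB": 2}},
])

def most_played(apps: dict) -> str:
    """The position actually played most, regardless of 'natural' positions."""
    return max(apps, key=apps.get)

df["MostPlayed"] = df["AppsByPosition"].apply(most_played)

# Player A now feeds only the STC sample, not the CB coefficients.
cb_sample = df[df["MostPlayed"] == "CB"]
print(cb_sample)  # empty: the CB/STC hybrid no longer pollutes the CB list
```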
Middleweight165 said: @Orion Also, can you explain this from the opening post:
- MLR is Winger
- AMLR is Fast Striker
Are you saying that if I want to select the best AMLR I should look at the Fast ST column in GS?
Yes, exactly. In GS you have only one Winger tab, and as we could see in the test, ML/R and AML/R use slightly different attributes, so I wanted to somehow fit that into the GS ratings file.
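For anyone who'd rather do this outside GS, a rating column like that is just a weighted sum of attributes. Here's a minimal sketch with placeholder weights; the real weights would come from the test results, not from this example:

```python
import pandas as pd

players = pd.read_csv("players.csv")  # hypothetical export, as above

# Placeholder weights for a 'Fast ST' style AML/R rating; illustrative only.
FAST_ST_WEIGHTS = {
    "Acceleration": 0.25, "Pace": 0.25, "Finishing": 0.20,
    "Dribbling": 0.15, "Composure": 0.15,
}

def fast_st_rating(row: pd.Series) -> float:
    """Weighted attribute sum, scaled to a 0-100 style rating."""
    score = sum(row[attr] * w for attr, w in FAST_ST_WEIGHTS.items())
    return round(score * 5, 1)  # attributes run 1-20, so the maximum is 100

players["FastST"] = players.apply(fast_st_rating, axis=1)
print(players.sort_values("FastST", ascending=False).head(10))
```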