Just to add to the topic. CA is a relatively weak indicator because it's influenced by the attributes related to the player's natural position - and those often don't come along with meta-attributes. CA is more related to a certain attribute distribution than to those attributes' actual effect on the given position.
Example: Determination has literally 0 'weight' (no matter the natural position), so it neither influences nor is influenced by CA AT ALL, yet Determination has an impact on a player's performance, as shown here.
Another example: Jumping Reach has a very low weight for strikers, so if we have 2 strikers, both with almost the same CA, one can have 20 Jumping Reach and the other 1. Make a guess which one will score more BY A LOT.
Example weights of attributes for each position in FM21:
There was a Chinese experiment where a guy made a team consisting of 'artificial' players with only 1 CA and won the Prem, simply because he abused the attribute distribution that feeds into CA.
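To make the distinction concrete, here is a minimal sketch with invented weights (these are NOT the real FM weightings, just placeholders): because Jumping Reach is assumed to contribute almost nothing to a striker's CA, two strikers can land on nearly the same CA while one has 20 Jumping Reach and the other 1.

```python
# Hypothetical position weights - invented purely to illustrate how CA can be
# a position-weighted sum of attributes rather than a measure of match impact.
STRIKER_CA_WEIGHTS = {"finishing": 10, "pace": 10, "dribbling": 8, "jumping_reach": 1}

def ca_cost(attrs: dict) -> int:
    """Position-weighted sum standing in for the CA these attributes 'cost'."""
    return sum(STRIKER_CA_WEIGHTS[name] * value for name, value in attrs.items())

striker_a = {"finishing": 15, "pace": 15, "dribbling": 15, "jumping_reach": 20}
striker_b = {"finishing": 16, "pace": 15, "dribbling": 15, "jumping_reach": 1}

# Almost the same CA, yet a 19-point gap in Jumping Reach:
print(ca_cost(striker_a))  # 440
print(ca_cost(striker_b))  # 431
```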
recoba said: Ok, thanks for getting back to me. I still think there might be some value above 8 attributes. I see what you're saying about the lesser ones making 1% of impact which wouldn't be worth it. But let's say for argument's sake we take the next 3 highest factors for a striker (right now #8 is Composure with 45%), so #9 could be Technique with 36%, #10 Long Shots with 33% and #11 is Passing with 30%. If Jumping is most impactful at 100%, then a player that has Jumping 10 but the 3 I mentioned at 16 would be as effective (in theory) as one with 10 in the ones I mentioned and 16 in Jumping.
A list of top 15 or 20 attributes for each position would be very useful I think!
Anyway, I understand if you're too busy because you're doing this out of your own time to share with the community. If you can't I think I'll tweak your general results from the FM23 test and add them in (the impactful ones).
I'm thinking about this more in order to 'optimize' finding the best player, out of curiosity, not just to try to win more games in FM. I use them with Genie Scout ratings, so it's not a lot of trouble for me to insert more attributes (I don't use a spreadsheet or a visualizer).
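recoba's tradeoff above can be sanity-checked with a quick weighted sum, treating the quoted percentages as linear weights - an assumption, since nothing says the engine combines attributes linearly:

```python
# Weights are the percentages recoba quotes (Jumping 100%, Technique 36%,
# Long Shots 33%, Passing 30%), treated as linear multipliers - an assumption.
weights = {"jumping": 1.00, "technique": 0.36, "long_shots": 0.33, "passing": 0.30}

player_a = {"jumping": 10, "technique": 16, "long_shots": 16, "passing": 16}
player_b = {"jumping": 16, "technique": 10, "long_shots": 10, "passing": 10}

score = lambda p: sum(weights[k] * v for k, v in p.items())
print(score(player_a))  # 25.84
print(score(player_b))  # 25.90 -> nearly identical, as recoba suggests
```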
I'm expecting to have more free time next month, so maybe I'll be able to do something about it. My long-term plan was to also do this analysis for the other FM editions I own on Steam, but as I said, it takes quite some time just to process the seasons - and believe me, even though the devs didn't brag about it, they optimised the game by A LOT, and the processing speed difference between FM24 and FM23 is significant. I can't imagine how slow it will be for the older ones.
The data is there, and it's possible to just run the analysis using only the visible attributes from the player's profile (so Technical, Mental, Physical - excluding set pieces/penalty taking). As always, it's a matter of free time.
flob said: I understand. So now, after reading your reply, I decided to try the formula manually for 2 players, and doing it manually, my main AML player (Strjacki) scores higher than the backup player (Lemmens). This is correct. Genie Scout however, with your file, rates it the other way around with a massive gap. I'll post some screenshots so you can have a look, as you created the Genie Scout file.
Strjacki - main AML
Lemmens - backup AML
Genie Scout picture 1
Genie Scout picture 2
You can see in Genie Scout picture 2 that I have the column sorted by Fast Striker, and in picture 1 in combination with 2 you can see where my main AML is listed, including the % compared to other players, and my backup player Lemmens. It's very strange that Strjacki only scores 66,51% with your Genie Scout file. See the following picture for the FS ratings, just to be sure.
Genie Scout ratings picture for Fast Striker
I tried to replicate your case and I simply couldn't.
I've changed the attributes of two players to match those of your players:
Player 1
Player 2
And the results for their GenieScout ratings are as shown:
So I have frankly no idea why you have those results.
My only wild guess is that maybe, besides the ratings we can set, Genie takes some other factors into the 'final rating'. As you can see, I literally copied all the attributes of your players to my players. So in theory they have attributes that should be suitable for AMLR, yet they have the highest rating in their respective 'original' positions (CD for Testing 1 and DMC for Testing 2), and their ratings for those positions are MUCH higher than they should be going by the attributes alone.
recoba said: Great work @Orion , I'm happy I now have attribute strength per position available to use! A few have already asked about a larger list of attributes. I'm wondering how much effort it would be to do your tests/calculations for more attributes? I'm asking because the 8th attribute for some positions is still in the 30% range, and I feel like if the 9th is at 20% that is still impactful on the player's 'ability'.
You asked where to draw the line - if you'd be able to do the calculation and leave out anything under 10%, that would be amazing!
I'm just a bit OCD with stuff like this, so I'd love to have a more complete Genie ratings file.
I'd be happy to create the Genie rating file afterwards and share it.
I answered a similar question before in this topic: Comment 1 ; Comment 2
Tl;dr: I can do it, and I did it before for the 'general model', but it's pointless since after a few attributes the rest become basically noise in the data.
Also, in data analysis, and especially in model development, more is not always better. Sometimes you use fewer features so the model 'generalises' and actually fits better. Another thing is that this was meant to be used mostly inside the game, as an indicator of core attributes. I don't expect anyone to compare players based on 50 attributes; 6-8 is doable even for a 'visual' comparison.
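A minimal sketch of that feature-selection point, on synthetic placeholder data (not Orion's actual pipeline): fit a linear model, rank attributes by coefficient magnitude, and keep only the top handful - the tail coefficients are mostly noise.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))  # 500 players x 40 attributes (synthetic)
# Only the first 6 "attributes" actually drive the rating in this toy setup:
y = X[:, :6] @ rng.uniform(0.5, 1.0, size=6) + rng.normal(scale=2.0, size=500)

coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
ranked = np.argsort(-np.abs(coefs))  # most impactful attribute indices first
print(ranked[:8])                    # keep the 6-8 that matter; drop the noise
```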
flob said: Alright, thanks for clearing that up. However, there is something massively wrong then with the classic Genie file. My best AML player is 15th for Fast Striker in Genie Scout, whereas on the Excel spreadsheet he is one of the best (2nd). I am aware of some players being DC and showing up on top for other roles, but 15th, with worse AMLR players above him, means something is wrong there.
I didn't pay attention to how this spreadsheet was made. I didn't make it, so I don't know how it works. Maybe it uses different coefficients since, as far as I remember, I updated them once (when I removed set pieces).
derek said: Thank you for your exciting work. It really helps me a lot.
There is now an updated version of the Chinese Engine, as attached.
Is it possible for you to test this engine, and also make a GS rating file? That would be much appreciated.
If I have any time soon, I may take a look at it. I've just finished another test with a different match engine. The data is already there, I just don't have time to get through it - and I have some new ideas to check for more diverse results.
To put it into perspective: on my mediocre PC it takes around 4 hours to simulate a single season for those tests, and I usually try to get around 10 seasons per match engine - so roughly 40 hours of simulation per engine.
flob said: I was re-reading everything and a new question popped up in my head. Does Inside Winger fall under AMLR? That's how I treated it, but if it's under MLR then I've been looking at it wrongly.
In the test there is no indicator of which player plays which role. They are only distinguished by their natural positions - my great wish would be the possibility to filter them by 'most played position' and not just 'position they can play'. This introduces some errors since, as an extreme example, if a player is natural as both a Striker and a CB, he'll appear in both categories. My best hope is that in the sheer mass of players such outliers are outweighed.
In the GenieScout rating files there is no AMLR (only a 'regular' Winger, no matter the position), so their rating is set under the 'Fast Striker' group.
harvestgreen22 said: I also tested a lot of other tactics, and the results showed that if you don't go 'extreme meta' - say, I removed 'Get Stuck In' from the tackling instructions - these meta tactics, against opponents such as MARCO ROSE, DIEGO SIMEONE, and STEFANO PIOLI, have a "loss rate" that is 10-20% higher than their "win rate"; that is, they get overwhelmed.
I know it's not always THAT simple, but can we make a rule of thumb that many 'semi-decent' (or just self-made) tactics will improve significantly just by turning on 'Get Stuck In' and probably using the Player Instruction 'Tackle Harder' for most of the team - since most meta tactics use both of these features?
So if your tactic is not total garbage that makes no sense, just tick 'press more', 'get stuck in' and 'tackle harder' and you're ready to overperform.
Middleweight165 said: @Zippo Why is it important to base the testing on the CA rating of the top 100 players for each position? Don't we know that the 100 highest-CA-rated players are not the 100 best-performing players in the game?
@Orion Also, can you explain this from the opening post - MLR is Winger - AMLR is Fast Striker
Are you saying that if I want to select the best AMLR I should look at the Fast ST column in GS?
Yes, exactly. Because in GS you have only 1 Winger tab, and as we could see in the test, ML/R and AML/R use slightly different attributes, so I wanted to somehow fit this into the GS ratings file.
We learned that the ratings don't purely depend on the attributes; they also greatly depend on other factors, such as the quality of the team and the tactic being used.
So if one striker gets better ratings than another striker, it doesn't necessarily mean that he has better attributes; it could be just down to him being in a better team, or playing in a tactical approach that generates better ratings for the striker position.
I realise that. That's why I take my coefficients with a huge grain of salt. I understand that issue - players being played in a suboptimal tactic or even a suboptimal position. My assumption is that the sheer amount of data I had overcomes those outliers, but frankly you can never be 100% sure.
My wished-for feature to overcome that obstacle would be the possibility to filter players not by their 'natural' position but by their 'most played position'. This way we could get rid of the scenario where, for the sake of argument, we have a player who is natural as both CB and STC. He plays as STC, scores a lot of goals and so gets a high average rating. But in my model he also shows up in the CB lists, because I filter based on the player's positions - sadly I have no other option - and thus he influences the coefficients for CB attributes in ways we know are not true, since he doesn't actually play that position. See the toy illustration of this contamination below.
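Here is that scenario in miniature; the data and column names are invented for illustration only:

```python
import pandas as pd

players = pd.DataFrame({
    "name": ["A", "B", "C"],
    "natural_positions": [["CB"], ["CB", "STC"], ["STC"]],
    "most_played": ["CB", "STC", "STC"],
    "avg_rating": [6.9, 7.8, 7.4],
})

# Filtering on "is natural at CB" pulls in player B, whose high rating comes
# from playing striker - this is what skews the CB coefficients:
cb_by_natural = players[players["natural_positions"].apply(lambda p: "CB" in p)]

# The wished-for filter, which would keep B out of the CB sample:
cb_by_played = players[players["most_played"] == "CB"]

print(cb_by_natural["name"].tolist())  # ['A', 'B']
print(cb_by_played["name"].tolist())   # ['A']
```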
Just for the record, the fm-arena test isn't a 'lab test' or something like that.
In our test we try to recreate the real game environment as much as possible, otherwise there would be no point in doing the test at all, because the findings would not work in the real game environment anyway.
I want to stress that the attributes of the players in our test league weren't pulled out of thin air.
For example, the attributes of the Wingers in our testing league are based on the average attributes of the 100 highest CA wingers in the game.
The attributes of the Full Backs in our testing league are based on the average attributes of the 100 highest CA Full Backs in the game.
The attributes of the Central Defenders in our testing league are based on the average attributes of the 100 highest CA Defenders in the game.
And so on...
In other words, if we take the 100 highest CA wingers in the game and average their attributes, we get the attributes of the wingers in our testing league.
As you can see the attributes of the players in our test league represent "perfectly" what we have at the top level in the real game.
You might ask why we only focus on the top level, and I can answer that we don't bother with testing a "lower league" environment because the difficulty level in such an environment is extremely low - any low league can be dominated with a very average tactic and very basic managerial skill.
FM only gets challenging at the top level - I'm speaking about such high-reputation competitions as the English Premier League, Spanish La Liga, Italian Serie A, Bundesliga, Champions League and so on - so we focus our testing on the kind of environment where people get challenged the most in FM.
I hope this helps to clear things up.
Cheers.
Thanks for clarifying. My sincere apologies then. I thought your test was somehow similar to what Zealand did - that your test league is all the same teams. I'm sorry if what I said is not true. It comes from the fact that I never saw what's behind your test, so I had to make assumptions. It's also not clear - at least in the Attribute Test main post - how the testing league is set up. We only see the results - though in some previous editions I think there were screenshots of the league tables.
It's also an interesting point that you average the attributes of the 100 best players in a certain position. It would be interesting to see whether those attributes at the start of the default database are vastly different from the ones in a save, say, 20 years into the future, where all the real players have been replaced with Newgens - not only to see how Newgens are built differently from real-life players (we know, for example, that Newgen WBs/FBs lack high Crossing values), but also because long-term saves may shift those average attribute values.
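Zippo's league construction described above reduces to a simple aggregation. A sketch under assumed column names (the real attributes come from the game's database, not a CSV):

```python
import pandas as pd

df = pd.read_csv("players.csv")  # hypothetical export: name, position, ca, attributes...
attribute_cols = [c for c in df.columns if c not in ("name", "position", "ca")]

template = (
    df.sort_values("ca", ascending=False)
      .groupby("position")
      .head(100)                           # 100 highest-CA players per position
      .groupby("position")[attribute_cols]
      .mean()                              # average them -> test-league player template
)
print(template.round(1))
```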
I understand this puts the better players in real life with higher ratings, but is this then not inferior to the ratings file based off the meta attributes?
(Struck through - explained by Zippo's comment above.) I think he meant that FM Arena is a great attribute test but kind of an unrealistic scenario. You have a whole league with literally the same teams, and one has all its players with 1 attribute changed. That's far from a 'real scenario' where, for the sake of argument, you usually have a big CB and a small striker. In the FM Arena case you have, for example, a team whose players all have 10s in every attribute no matter the position, and an opposing team whose players have all 10s except the one attribute being tested. Here you have data based on players from a 'regular' game, so they are far more diverse. You could say that FM Arena is more like a 'lab test', a strict scenario, while this experiment is more like checking the real-life population.
In CM times, training exercises were a text file with just numbers assigned to categories, for example: Offensive: (1-7), Defensive: (1-7) - so you could make your own 'exercise' that maximised improvement in all areas without much fatigue.
Jolt said: Orion, do you happen to have the full results for all attributes in all positions for the standard engine, instead of just the top 10?
Nope. I just ran the model for 8 iterations for every position. As I said somewhere before, using all attributes is quite useless, since a lot of those attributes have much lower coefficients that don't bring much to the final result. You can see a sample result for the FM23 strikers model using all features here. Around 6 features are quite important, then the rest get lower and lower, and the last ones are basically 'noise' in the data.
Yarema said: B teams in non-playable leagues actually do play official matches; that is why the assistant isn't scheduling friendlies for the majority of the season (only preseason and the winter break). You can see the players are getting competitive league appearances, but you cannot see the actual matches or schedule.
The issue with B teams is that they are mostly semi-pro, which is kind of awful for development. Different countries do B teams differently, though. For example, in France a B team is part of the main club, sharing professional status. In most other countries a B team is an affiliate.
You mean the team that plays 'hidden games' in a non-playable league, or the one with no friendlies by default? I am aware of the feature where the team can play games that are not in the schedule. I'm talking here about a case where the B team has no games at all.
So basically, B teams that are in a non-playable league and play no official matches can be effectively used for player growth if we just set the assistant to 'Arrange a fixture if there is no match in the week'.
As always thank you very much for providing crucial evidence!
ClaudeJ said: I did, and even quoted it a few posts above. You may have missed that context clue. I may not have fully grasped it the way you intended.
Anyway, I understand I'm not the target audience of your work, and I thank you for taking the time to elaborate.
Cheers
Then you'd have noticed, for example, the part about the 1000 minutes required to be part of the training data set, and about combining league rank into the players' data.
The goal was to make a rather universal model that is more of a guide on what attributes to look for than some absolute solution to player selection. I did it mostly for myself out of curiosity, and since I had already done the work, I decided to share it with the community so others can benefit from it. Then I decided to extend it with attributes for each position (because the first model was for all outfield players), and then expanded it with custom match engines.
I'm currently not planning to do more models dedicated to top leagues, because I believe they won't be that different from the general models.
I highly recommend reading about the methodology of this research - the source method posted on the SI forum - because clearly you haven't read it.
ClaudeJ said: My main concern is that the average includes non-playing players, possibly those not even on the squad roster.
It definitely does not, because:
1) players that don't play won't have an average rating at all - and that's the target value;
2) the player filtering includes a minimum of 1000 minutes played in the league that season.
ClaudeJ said: PS: ingame, each competition has a reputation value, regardless of its continent, and that could serve as a global relative ranking.
Each player has an added custom feature representing their league's ranking in Europe (based on league reputation). On top of that, every player's attributes are 'transformed': instead of the 'absolute' value of an attribute, the model uses the difference between the player's actual value and the average value for the league he's playing in, for every single attribute considered. A sketch of both preprocessing steps follows below.
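A minimal sketch of those two steps - the 1000-minute filter from the earlier reply and the league-relative transform - with hypothetical column names:

```python
import pandas as pd

df = pd.read_csv("players.csv")  # hypothetical export of one season's data

# Filters: a player must have an average rating and 1000+ league minutes.
df = df.dropna(subset=["avg_rating"])
df = df[df["minutes_played"] >= 1000]

# Transform: express every attribute relative to the league average, so the
# model sees "how much better than the league norm" instead of absolute values.
attribute_cols = ["pace", "acceleration", "finishing"]  # ...and the rest
league_means = df.groupby("league")[attribute_cols].transform("mean")
df[attribute_cols] = df[attribute_cols] - league_means
```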
ClaudeJ said: I understand that refining the dataset would require a lot of additional work, but I truly believe it would better serve its intended purpose by providing a clearer and more accurate representation of attribute importance at a competitive level.
Not everyone plays in the top leagues - I don't. In the FM era I have never managed a team in the top 5 leagues. I'd also bet that among people who lurk on this forum there is a higher share of players managing in lower leagues than in the general population of FM players. Standardisation should compensate for the 'relativeness' of attributes. Please read the topics mentioned before - they would answer a lot of your questions and concerns.