ZaZ
Poacher said: IMHO, the changes between Blue 2.0 and Blue 1.0 would make a 1% difference at the most :)

Maybe you are right. I am just saying that everyone who tested both prefers 2.0, since it gets better results. It could very well perform worse in some tests, under specific conditions. I would also say it's very hard to optimize a tactic when it's already close to the top, so you can't dismiss a 1% improvement as if it were trivial to achieve.

P.S.: I don't understand why people are downplaying his results. They don't change any of the results from fm-arena, as the two are not comparable: they use different methodologies and are meant for different purposes.
Poacher said: @Mark, if you'd ask me then I wouldn't say there's a clear winner




ZaZ Blue 2.0 tactic: Avg. points per game = 1.916

Phoenix 3.0 tactic: Avg. points per game = 1.849

The difference in PPG is less than 3.5%

I guess you tested without eliminating RNG factors the way the fm-arena testing does, I mean freezing morale, condition and so on. In the fm-arena testing the Phoenix tactic has 2.114 PPG and the Blue tactic has 2.092 PPG, so there the difference in PPG is less than 1.1%. I'm sure that if you had eliminated RNG you would also have ended up with something like a 1.1% difference in PPG, and you really can't say there's a clear winner when the difference is less than 1.1% :)


Blue 2.0 wasn't selected for testing in fm-arena. It doesn't appear in the table.

Also, the difference of 0.08 points per match is basically the difference between a 7.0 and a 6.8 tactic here, so it shouldn't be considered negligible.
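
For anyone who wants to check the percentages being thrown around, here is a quick illustrative sketch (plain Python; nothing here comes from either test setup beyond the quoted PPG numbers, and the ppg_gap helper is just a made-up name):

```python
# Relative difference between two points-per-game (PPG) values, in percent.
def ppg_gap(better: float, worse: float) -> float:
    return (better - worse) / better * 100

# Mark's lower-league test (no RNG control): Blue 2.0 vs Phoenix 3.0
print(round(ppg_gap(1.916, 1.849), 2))  # ~3.5%

# fm-arena test (RNG controlled): Phoenix vs Blue 1.0
print(round(ppg_gap(2.114, 2.092), 2))  # ~1.0%
```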
Mark said: I don't disagree with your points about points per match being the most important stat, @Nikko and @Milakus. This is why it is the main stat I have used in the summary results. However, I think winning the league means you had the best side that year in that league, so I think it is a supplementary indicator.

In terms of the variables and random factors, I think the extent of the testing would reduce these somewhat. The higher the number of games played, the lower the random impact will be. For example, 1800 games of testing should provide a much better picture than, say, 150 games.

I love FM Arena. The concept is brilliant and the value to the FM community is huge. However, I have often thought that by using teams with talent at the level of the EPL and the Champions League, the testing doesn't always assist us lower league players. And I certainly like that there are differing views put forward on the forums.

Having said all that, I did this for fun and don't claim it to be definitive. Take what you want from what I have produced. I play the lowest tier in each save and will certainly be favoring ZaZ Blue 2.0 until the next major update from SI.


There are also some factors that normal tests from here or fm-base won't account for, like the pressure added by a title run, injuries to key players leaving no options for some positions, and exhaustion from a tight schedule. Those factors induce player mistakes, which can show which tactics are more tolerant of errors. I believe it's a very realistic scenario of what people encounter in practice, instead of always having players in their best shape to minimize variation. It's normal for players to oscillate during a season, and the high number of tests makes up for the unwanted randomness.
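
As a rough illustration of that last point (a generic statistics sketch, not anything taken from Mark's actual setup; the ~1.2-point spread per match is an assumed figure), the noise on an average PPG shrinks with the square root of the number of games played:

```python
import math

# Standard error of the mean PPG after a given number of games,
# assuming some per-match standard deviation of the points result.
def ppg_standard_error(per_match_std: float, games: int) -> float:
    return per_match_std / math.sqrt(games)

for games in (150, 1800):
    print(games, round(ppg_standard_error(1.2, games), 3))
# 150 games  -> about 0.098 points of noise on the average
# 1800 games -> about 0.028 points of noise on the average
```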

I don't think he claims his results are absolute; it's just a different approach, which might lead to different results. He is simply evaluating the tactics under less ideal conditions.
Milakus said: Only the points per match are important and the final standing means nothing, because the standing depends on the performance of the AI teams, which can vary greatly and depends on many random factors, so in one test some AI managers can do very well and in another test they might do poorly.

The comparison was made using points per match. The number of titles was just an interesting fact. I believe extra data is always useful.
Milakus said: Good work, Mark.

But when you look at your points per match table I don't think you can determine a clear winner, because there's only about a 3.5% difference between the tactics and, probably, if you remove all random factors like fm-arena does in its testing, you'll get less than a 1.5% difference between the tactics when you look at the points per match. That's why all three tactics are rated 7.0, because a 1.5% difference is almost nothing :)


He tested Blue 2.0, which is better than the original Blue. That tactic wasn't selected for testing here.

@Mark, can you make a compilation of each team's average attributes from the team report? It would be interesting to know first touch, determination and the other attributes, to look for correlations.
Mark said: OK, the results are in. Just under 1800 games for each tactic.

Points per match by league using top ranked and lowest ranked sides:



I think ZaZ Blue 2.0 is a clear winner for lower league management.

Points per match by club:



Weymouth are a bit of an anomaly. I feel inclined to take the challenge and try and win something with them on my next save using ZaZ Blue 2.0.

ZaZ Blue 2.0 Summary results and individual test results:



Phoenix v3.0 Summary results and individual test results:



Viola V1 Summary results and individual test results:



Weymouth is probably much worse than the other teams in the league, like a squad with non-league stats competing against professionals. I will also try playing them, since it seems to be hard mode.

About the results, first I want to thank you for the effort. It's always nice to have people testing tactics, but you brought it to another level with such a thoughtful report. The methodology was also sound, which makes me believe the results are as close as possible to what the average player would experience. I am also happy to see Blue 2.0 winning, obviously, since it won't be tested here and is unlikely to be tested on fm-base anytime soon. I needed to know how it performs, because I always wondered whether the play style only felt better because a higher tempo is more enjoyable to watch.

Other than Weymouth, the worst position for Blue 2.0 was an 8th place. It also won the championship 18 out of 20 times, finishing in promotion range 28 out of 40 times. For comparison, Phoenix won only 11 times and finished in promotion range 26 times, very similar to Viola. Blue 1.0 would probably also fall in that range, since the three have very similar performance.

Good job there!
Grimlock said: I ended up using Blue 2.0 and the starting eleven below



My team won everything. Thank you. :)



You're welcome! Haaland must have scored like there's no tomorrow. He is really good when retrained as an SS.

P.S.: Just saw that he was the top scorer across all competitions. 63 goals in a year is not bad; it's more than Messi and CR at their peak.
Grimlock said: Thanks, pal. I'll give it a go.

I recommend using Blue 2.0, as the original is outdated. Please share the results and tell us if you enjoyed the play style.
Grimlock said: Guys, how should I line up Borussia Dortmund for this tactic? Can anyone help?

Thanks.




Why not Reus? Because he is made of glass.
Liam said: Ironically not stressed at all, it's a relief, to be honest.

Also, you're right - I assumed from the tone of the comment that it was hostile towards Base and instinctively got protective. My bad!

Hope everyone is okay?


I guess I am the one to blame since I said Knap could be rigging his tests. Sorry everyone.
Luisinho said: You didn't say anything wrong, mate; I can't understand those exaggerated reactions. The internet is going really crazy nowadays.

It's hard to understand what people mean without voice intonation and facial expressions. Also, this whole Knap thing must have stressed the guy a lot lately. I hope things cool down so people can go back to treating this as just a hobby.
Liam said: They don't get any priority! Currently, our queue selects tactics totally at random due to the nature of working with two different websites/platforms. In the future, it will test in the order of the uploads. Hope that clears that up. The only reason more of Knap's were tested is because he had over 200 uploaded, out of a total of ~700.

Thank you for the hard work on the platform.
Egraam said: It's only natural that, if they pick tactics at random and there are so many Knap tactics, some of his tactics will be tested first - it's simple probability.

I use both sites, both sites use different methods of testing, and both sites are testing the tactics entirely for free, benefiting the FM community. No need to be unnecessarily hostile by throwing accusations.


I believe he just meant that his tactics get priority, which would be natural if there were an agreement. No need to escalate things. Both sites have done a great deal of work for the community and deserve respect. I don't think anyone doubts their testing methods; this just looks like a misunderstanding.
Mark said: @ZaZ All the numbers are average points per game. The top ranked sides and low ranked sides were determined using FM Scout when the game was first saved. So low ranked sides were expected to be relegated and top ranked sides to be promoted. Non League sides are Vanarama leagues. Hope that helps

Those are awesome results, then. I suppose it will even out more once all tactics have been run through all the test sets. I still expect Blue 2.0 to have an edge, though.
kvasir said: How did you get the percentage view of physical condition? I hate the new symbol.

I second this question. This was by far the worst change of this version.

P.S.: After so many positive tests, I changed the main post to make sure people know 2.0 is better.
Go, Blue 2.0, go!

P.S.: Can you please explain what the stats mean? What are they and how do I read them? Is higher or lower better for each? (P.S.2: I believe it's the average points with a top team and a low team, but I have no idea what "non league sides" means.)
healmuth said: Freaking done with Knap and his shit this year. He's always been a multi-tactic guy, but man, his freaking 50 releases a day is a joke.

The worst part is that, because of his tactic spam, other people can't have their tactics tested. For example, I posted my Blue 2.0 at fm-base one week ago, but there is a queue of 100 tactics from Knap before they even think about testing it.
Mark Tighter said: I don't think he rigs anything, he simply always uses Liverpool, the strongest team in the Premier League and probably the strongest team in the whole game. It takes a really awful tactic to not win with Liverpool.

Maybe you are right, or maybe he tests with maximum condition, morale and tactical familiarity, as well as selecting the team manually. Anyway, it surprises me that none of his tactics was good in this patch.
Nothing against the topic starter, but I am getting a bit tired of so many "Knap cool name 300 points with Liverpool all cups" posts. I mean, the guy has the same results for all possible formations, in all variations. I am starting to think he rigs the results somehow.
114 points in FM? Seems legit.