In contrast to our Benchmark 17 days ago, these games:
- Were played against significantly better human players
- Used hero lineups provided by a third party rather than by Five drafting against humans
- Removed our last major restriction from what most pros consider "Real Dota" gameplay
Remarkably, the games were exciting and close (all three Benchmark games, by contrast, were very one-sided), showing that even though Five teaches itself Dota from scratch, its playstyle produces compelling games against the best professionals. Winning is good, but losing demonstrates the remarkable skill of top professionals and helps us compare Five's play to the best of the best.
We're incredibly grateful to everyone in the Dota community for helping to create such a great training ground for AI progress: from motivating Valve to create and evolve the incredibly complex game, to supporting the analysts and professionals who can help us measure our progress, to the excitement we've seen from so many viewers which makes the project so much more fun to work on.
The purpose of the games was to showcase Five's capabilities against the world's best humans in games of "Real Dota". Going into The International, we weren't sure exactly who we would get to play, as it depended on the availability of people willing to face us on the mainstage. We were grateful to play against teams far stronger than the one at the Benchmark.
Five played its first match on Wednesday against paiN Gaming, one of the top 18 Dota 2 teams in the world, which had been eliminated from The International earlier in the tournament. PaiN's players have won an average of $350,000 in career tournament earnings. The match lasted around 51 minutes (games usually last 45 minutes); after a strong start for the humans, Five regained some ground in the mid-game before succumbing to a series of high-level strategic pushes by the human players. On Thursday, we played our second game against a team of Chinese superstar players, three of whom had previously played on a competitive team together; their average career tournament earnings are about $1 million each. After some exciting back-and-forth teamfights, Five lost after 45 minutes.
The Benchmark games contained a significant restriction, which we have now removed: each hero was given its own invulnerable courier (a unit which delivers items to your hero) rather than the team sharing a single mortal courier.
The extra couriers led Five to develop its signature high-pressure playstyle: because the couriers constantly delivered regeneration items, Five's heroes could attack toward the enemy's base without pause. During a normal Dota game, heroes at low health would instead have to abandon the attack to heal up. Many observers felt that the extra couriers made the games feel unlike "Real Dota".
We began training with a single courier six days ago (the courier itself, like its predecessors, is scripted). While we expected the transition to a single courier to temporarily decrease Five's performance, community feedback made it clear that single-courier gameplay would be much more exciting.
We don't believe that the courier change was responsible for the losses. Rather, we think we need more training, bugfixes, and the removal of the last pieces of scripted logic from our model.
As we said in Wednesday's panel, we are looking forward to pushing Five to the next level. These games have set a new high watermark for human vs AI games in Dota, and give us a lot to aspire to. But Five isn't just about Dota — it's about building AI technologies in a safe sandbox which will help us build advanced systems in the future. If you want to help us build these systems and ensure they are safe and will benefit all of humanity, then consider joining OpenAI.