bot bowl V

Title: Bot Bowl V
Organizers: Mattias Bermell, Modl.ai
Niels Justesen, Ph.D., Modl.ai
Website: https://github.com/njustesen/botbowl
Description + Evaluation Criteria:
Bot Bowl is an AI competition using the Bot Bowl framework (previously known as FFAI). The framework simulates the board game Blood Bowl by Games Workshop and offers Python APIs for scripted, search-based, and machine learning bots. It is also possible to use other programming languages, although no APIs currently exist for them. Blood Bowl is a major challenge for artificial agents due to its complexity and sparse rewards; in fact, we still do not have basic baseline agents that are able to score points in this game! We do, however, provide tutorials on how to develop scripted, search-based, or ML bots.
The competition will have one track using the traditional board size of 26×15 squares with 11 players on each side. As last year, we will restrict the game to a predefined Human roster (i.e., no Orcs, Elves, etc.). The format is a round-robin tournament in which each bot plays 10 matches against every other bot, and the winner is determined by the following point system: 3 points for a win, 1 point for a draw, and 0 points for a loss. In case of a tie, the bot with the most touchdowns, and then the most inflicted casualties, wins. In the future, when state-of-the-art bots reliably beat humans in this setting, we plan to introduce more races and player abilities to the competition, thus increasing the branching factor and adding more of the long-term strategic elements the game is known for.
Bot Bowl has been run four times in the past, with an average of five bots participating in each tournament. Bot Bowl III saw the rise of a competitive machine learning bot that used imitation and reinforcement learning. Last year's competition, Bot Bowl IV, saw a mix of AI techniques including MCTS, scripts, and neural networks; in the end, a purely scripted bot won the competition. This year we plan to introduce an even faster forward model to the framework, hopefully resulting in stronger search-based bots. We are also setting up a master's thesis proposal to investigate how imitation learning from human replays can be used.
Tracks: Single track with traditional board size of 26×15 squares with 11 players on each side
Video: in progress

tales of tribute AI competition

Title: Tales of Tribute AI Competition
Organizers:
Jakub Kowalski, University of Wrocław
Dominik Budzki, University of Wrocław
Damian Kowalik, University of Wrocław
Katarzyna Polak, University of Wrocław
Radosław Miernik, University of Wrocław
Website: https://github.com/ScriptsOfTribute
Description + Evaluation Criteria:
The deckbuilding card game Tales of Tribute is a special activity recently added to the massively popular multiplayer online role-playing video game The Elder Scrolls Online. Although the game remains small by CCG standards (about 120 cards and only a few keywords), it is interesting and challenging for humans, with significant potential as a good AI testbed.
The players start with the same base cards and build their decks during the game by buying cards from a shared resource pool called the Tavern (a concept similar to Dominion). Tales of Tribute encourages strategic long-term planning – to ensure the deck we build contains enough strong cards and is not cluttered with weak ones. At the same time, because a large randomness factor influences, e.g., which cards appear in the Tavern, it is usually hard to stick to a pre-made plan – so the game also requires considerable adaptiveness.
The Tales of Tribute AI Competition aims to fill the void left by the Hearthstone AI Competition while being significantly more challenging than the toy problems covered by Legends of Code and Magic and the Strategy Card Game AI Competition.
The competition will be run using ScriptsOfTribute, an open reimplementation of the original game in the .NET framework designed especially for this event. It features a game manager that runs AI agents, implemented as C# classes, against each other, and a graphical user interface that supports human-vs-AI games.
The competition's winners will be decided by global winrate in an all-vs-all tournament, conducted over a large number of mirror matches using the same seeds. Submission deadline: August 7th.
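As an illustrative Python sketch of this evaluation scheme (the real tournament uses the ScriptsOfTribute game manager in C#; the `play` match runner below is hypothetical):

```python
import itertools

# Hypothetical interface: play(first, second, seed) returns the name of
# the winner of a single game.

def global_winrates(bots, play, n_seeds=100):
    wins = {b: 0 for b in bots}
    games = {b: 0 for b in bots}
    for a, b in itertools.combinations(bots, 2):
        for seed in range(n_seeds):
            # Mirror pair: same seed, sides swapped, so neither side
            # benefits from a lucky Tavern draw.
            for first, second in ((a, b), (b, a)):
                wins[play(first, second, seed)] += 1
                games[a] += 1
                games[b] += 1
    return {b: wins[b] / games[b] for b in bots}
```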
Tracks: Single track
Video: https://youtu.be/3FxBlZ40l6o

the 2nd DareFightingICE competition

Title: The 2nd DareFightingICE Competition
Organizers:
Chollakorn Nimpattanavong, Graduate School of Information Science and Engineering, Ritsumeikan University 
Ibrahim Khan, Graduate School of Information Science and Engineering, Ritsumeikan University 
Van Thai Nguyen, Graduate School of Information Science and Engineering, Ritsumeikan University 
Junzhang Chen, College of Information Science and Engineering, Ritsumeikan University 
Liyong Tao, College of Information Science and Engineering, Ritsumeikan University 
Hidetoshi Gaya, College of Information Science and Engineering, Ritsumeikan University 
Ruck Thawonmas, College of Information Science and Engineering, Ritsumeikan University
Website: https://tinyurl.com/DareFightingICE
Description + Evaluation Criteria:
Are you aware of any sound design (a set of sound effects combined with the source code that implements their timing-control algorithm) in video games that considers visually impaired players? Enhanced with a better sound design than its predecessor FightingICE, DareFightingICE is a fighting game platform for creating sound designs for such players. As in the first competition at CoG 2022, there are two tracks this year: an AI competition and a sound-design competition.
Submissions of an AI capable of operating with only audio input, and/or of a sound design for visually impaired players, are welcome.
The AI track uses this year's default sound design of DareFightingICE. There are two leagues: Standard and Speedrunning. In the Standard League, the winner of a round is the AI whose hit points (HP) remain above zero when its opponent's HP reaches zero. In the Speedrunning League, the winner is the AI with the shortest average time to beat our sample MCTS AI, which has access to delayed game states, and a sample deep-learning AI whose input is only audio data. This track's winner is decided from both leagues' results using the 2018 Formula-1 scoring system.
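For concreteness, a small sketch of how the two league rankings could be combined (the point table is the standard 2018 Formula-1 scoring; the function names and example rankings are illustrative):

```python
# 2018 Formula-1 points for finishing positions 1-10.
F1_2018 = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]

def f1_points(rank):
    """Points for a 1-based league ranking; positions beyond 10 score 0."""
    return F1_2018[rank - 1] if rank <= len(F1_2018) else 0

# Hypothetical inputs: {ai_name: rank} for each league.
def combined_scores(standard_ranks, speedrun_ranks):
    return {ai: f1_points(standard_ranks[ai]) + f1_points(speedrun_ranks[ai])
            for ai in standard_ranks}

print(combined_scores({"A": 1, "B": 2}, {"A": 2, "B": 1}))  # {'A': 43, 'B': 43}
```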
In the sound design track, the winning sound design is the one with not only the highest sum of scores from blindfolded human test players (scores here include both game scores relative to playing without a blindfold and subjective scores assessing sound aesthetics) but also the highest performance of the sample deep-learning AI fighting against the sample MCTS AI when the former is trained using the sound design of interest.
The winning AI (if trainable) and the winning sound design will be used in the sound design track and the AI track of the next competition, respectively.
Submission deadlines
Midterm deadline: May 24, 2023 (23:59 JST)
Final deadline: July 29, 2023 (23:59 JST) (no extension!)
Tracks:
(1) AI Track: This track seeks submissions for the strongest blind fighting game AI.
(2) Sound Design Track: This track seeks submissions for the best fighting game sound design.
Video: https://youtu.be/IojUrlXibvk

the Dota 2 5v5 AI competition

Title: The Dota 2 5v5 AI Competition 
Organizers: 
José Font, Department of Computer Science and Media Technology, Malmö University (MAU) 
Alberto Álvarez, Department of Computer Science and Media Technology, Malmö University (MAU) 
Website: https://games.mau.se/research/the-dota2-5v5-ai-competition/ 
Description: 
The Dota 2 5v5 AI Competition challenges participants to code a bot that competes against (and wins!) other players' bots in standard Dota 2 matches. The competition runs on the original Dota 2 game, thanks to the Dota 2 5v5 Framework, which lets you develop, deploy, and run your own Python program that controls the 5 heroes of either Dota 2 team: Radiant or Dire. Your bot will face other participants' bots in standard 5v5 matches. You can freely choose 5 heroes for your team from all those available. Depending on the number of entries, we will arrange either an elimination or a round-robin tournament. The winner will be the bot with the most wins and, in case of a tie, the one that beats its opponents fastest. The framework records the time elapsed from the start of the match to the destruction of the Ancient, the event that determines the end of a match.
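A minimal sketch of that ranking rule, assuming hypothetical per-bot match records (not part of the framework's API):

```python
# Hypothetical per-bot results: list of (won: bool, seconds_to_ancient: float).

def rank_bots(results):
    def key(bot):
        games = results[bot]
        wins = sum(1 for won, _ in games if won)
        # Average time to destroy the enemy Ancient in won games;
        # shorter is better, so it breaks ties among equal win counts.
        times = [t for won, t in games if won]
        avg_time = sum(times) / len(times) if times else float("inf")
        return (-wins, avg_time)
    return sorted(results, key=key)

results = {"BotA": [(True, 1800.0), (False, 2400.0)],
           "BotB": [(True, 1500.0), (False, 2000.0)]}
print(rank_bots(results))  # ['BotB', 'BotA'] -- same wins, BotB wins faster
```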
Tracks: One 
Video:
There is a short video (with subtitles) and detailed instructions on the competition website above, as well as results from 2021. 

6th annual GDMC - AI settlement generation competition in minecraft

Title: 6th annual GDMC – AI Settlement Generation Competition in Minecraft 
Organizers:
Christoph Salge, University of Hertfordshire, UK
Michael Cerny Green, NYU, US
Rodrigo Canaan, Cal Poly State University, US
Christian Guckelsberger, Aalto University, Finland & QMUL, UK
Jean-Baptiste Hervé, University of Hertfordshire, UK
Julian Togelius, NYU, US 
Website: http://gendesignmc.engineering.nyu.edu/
Wiki: https://gendesignmc.wikidot.com/start
Social Media:
Discord: http://discord.gg/ueaxuXj
Twitter: @gendesignMC 
Description: 
The GDMC competition is about writing an algorithm that can produce an “interesting” settlement for a previously unseen Minecraft map. It was designed to provide an AI competition focused on computational creativity, co-creativity, and adaptive as well as holistic PCG. Competitors submit their algorithm in one of two formats (both available via GitHub, see webpage), and the organizers then apply the algorithm to three previously unseen Minecraft maps. The resulting maps and settlements are then sent out to a panel of expert and public judges, who score each algorithm in four categories: adaptivity, functionality, evocative narrative, and aesthetics. 
This is the 6th iteration of the competition. We had 11, 21, 11, 7, and 4 submissions in previous years. Last year's winner was the team from Leiden University. We switched to two new frameworks this year, one developed by our active community and the other a more modern map editor for Minecraft. We have an active and helpful community, with over 300 Discord members, and GDMC is used in classes in at least 5 universities internationally. We have also been covered in traditional media, such as MIT Technology Review (link below), several online videos and podcasts (see videos), and have been invited to the Microsoft Research Summit on Game AI. Several papers have been published both about the competition and by our participants about their entries. There are numerous master's and bachelor's theses written using this framework now. We run one main competition, with an optional bonus challenge for chronicle generation, addressing the challenge of grounded computational storytelling. 
https://www.technologyreview.com/2020/09/22/1008675/ai-planners-minecraft-urban-design-healthier-happier-cities/
Video: 
There are about 25 introductory, framework-tutorial, showcase and media coverage videos about GDMC at this point. Here is a small selection:
General Introductory Video: https://youtu.be/opvVnpyiMmA
Framework introduction, by community member D. Churchill: https://youtu.be/e1ydZA4qfSs
Last year's winner presentation: https://youtu.be/uYUIZUGPNX8
Coverage on the AI and Games YouTube show:
https://youtu.be/_hP3RPFfSAI 
Bonus Information: 
Our website contains the following information for the public:
  • An online form to submit entries. 
  • Detailed rules for the competition, including submission instructions. 
  • The results of the previous three years, including code, generated settlements, competition maps, and scores. 
  • A detailed explanation of the scoring criteria, including the actual instructions given to our judges. 
  • A list of examples of good Minecraft settlements. 
  • Links to our other communication platforms: Discord, Twitter, and our Wiki. 
  • Links to a code repository containing example code for both submission options: 
    • Submission of an Amulet script – a Minecraft editor that allows for custom filters. 
    • Submission based on a Forge-based HTTP framework that allows for real-time interaction with the running Minecraft world. The framework opens an HTTP connection, and there are Python and C example implementations to get started. 
  • Links to related publications. 
Timeline for the 6th round of GDMC, 2023: 
  • February 2023: Announcement of the new round. 
  • June 15, 2023: Submission deadline. 
  • August 15, 2023: Deadline for evaluation by judges. 
  • August 2023: Announcement of results at CoG 2023 and online. 

geometry friends collaborative game AI competition

Title: Geometry Friends Collaborative Game AI Competition 
Organizers: 
Rui Prada, Francisco S. Melo, Inês Batina, and Inês Lobo, INESC-ID and Instituto Superior Técnico, Universidade de Lisboa 
Website: https://geometryfriends.gaips.inesc-id.pt/ 
https://gaips.inesc-id.pt/geometryfriends/ 
Description: 
The goal of the competition is to build AI agents for a 2-player collaborative physics-based puzzle platformer game (Geometry Friends). The agents each control a different character (circle or rectangle) with distinct characteristics. Their goal is to collaborate in order to collect a set of diamonds in a set of levels as fast as possible. The game presents problems of combined task and motion planning and promotes collaboration at different levels. Participants can tackle cooperative levels with the full complexity of the problem, or single-player levels to deal with task and motion planning without the complexity of collaboration. 
The winner is the submission that achieves the highest overall score over 10 different levels. Each level's score depends on the number of collectibles caught and the time the agents take to solve the level. 
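As a purely hypothetical illustration of a per-level score that rewards collectibles and speed (the organizers define the actual formula; every constant below is invented):

```python
# Illustrative only: each collected diamond is worth a fixed number of
# points, and a time bonus is awarded when the level is fully completed.

def level_score(collected, total, elapsed, time_limit,
                per_diamond=100, max_bonus=200):
    score = per_diamond * collected
    if collected == total and elapsed < time_limit:
        # Linear time bonus: faster completion earns more.
        score += max_bonus * (1 - elapsed / time_limit)
    return score

print(level_score(collected=3, total=3, elapsed=30.0, time_limit=120.0))
# 450.0 = 300 points for diamonds + 150 time bonus
```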
Tracks: 
The competition has 3 tracks: Cooperative, Single Player Circle, Single Player Rectangle 
Video: 
A video is provided in the guides section of the website. 
https://geometryfriends.gaips.inesc-id.pt/guides/ 

IEEE StarCraft AI competition

Title: IEEE StarCraft AI Competition 
Organizers: 
Jaeyoung Moon 
Isaac Han 
KyungJoong Kim 

Website: https://cilab.gist.ac.kr/sc_competition 
Description + Evaluation Criteria: 
The IEEE CoG StarCraft competitions have made remarkable progress in advancing the creation and evolution of new StarCraft bots. Participants have employed various strategies to enhance the bots, leading to the enrichment of game AI and methodologies. Furthermore, recent developments in game AI, particularly through the application of deep learning and reinforcement learning, have also contributed to the advancement of StarCraft AIs. Nevertheless, developing AI for the game remains a formidable challenge, given the need to effectively manage a vast number of units and buildings while considering resource management and high-level tactics. The competition’s primary goal is to stimulate the growth of real-time strategy (RTS) game AI and overcome complex issues such as uncertainty, unit management, and the game’s real-time nature. 
The winner of the competition will be determined through a round-robin tournament, with as many rounds as possible conducted before the winners must be announced. If it is estimated that there is insufficient time to complete another round, the tournament will be concluded at the end of the last full round. 
Tracks: Single track: 1 vs. 1 full game 
Video: https://youtu.be/6tr-RDd8L3Q

microRTS Competition

Title: MicroRTS Competition 
Organizers: 
Rubens Moraes and Levi Lelis 
Website: https://sites.google.com/site/micrortsaicompetition 
Description: 
Several AI competitions around RTS games have been organized in the past (such as the ORTS competitions and the StarCraft AI competitions: AIIDE, CIG, SSCAI), which spurred a new wave of research into RTS AI. However, as has been reported numerous times, developing bots for RTS games such as StarCraft involves a large amount of engineering, which often relegates the research aspects of the competition to a second plane. The microRTS competition was created to motivate research into the basic questions underlying the development of AI for RTS games while minimizing the amount of engineering required to participate. A key difference with respect to the StarCraft competitions is that the AIs have access to a “forward model” (i.e., a simulator) with which they can simulate the effect of actions or plans, allowing planning and game-tree search techniques to be developed easily. Although we acknowledge that planning in domains for which the agent does not have a forward model is an important problem, it is left out of this competition in order to focus on other core RTS problems. The winner will be determined by a round-robin tournament in which each agent plays all the other agents (plus a few agents the organizers add to the pool) on a set of maps. A victory counts as 1.0, a draw as 0.5, and a defeat as 0.0; the agent with the highest total score wins the competition. 
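A minimal sketch of this scoring scheme (the result records and runner are hypothetical; the competition's own tournament code lives in the microRTS repository):

```python
from collections import defaultdict

# Hypothetical result records: (agent_a, agent_b, outcome), where outcome
# is "a", "b", or "draw". A victory counts 1.0, a draw 0.5, a defeat 0.0.

def round_robin_scores(results):
    scores = defaultdict(float)
    for a, b, outcome in results:
        scores[a] += 1.0 if outcome == "a" else 0.5 if outcome == "draw" else 0.0
        scores[b] += 1.0 if outcome == "b" else 0.5 if outcome == "draw" else 0.0
    return dict(scores)

results = [("P1", "P2", "a"), ("P1", "P3", "draw"), ("P2", "P3", "b")]
print(round_robin_scores(results))  # {'P1': 1.5, 'P2': 0.0, 'P3': 1.5}
```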
Track: Classic, which focuses on the problem of large state spaces and branching factors by making the game fully observable. The game will be configured to be deterministic.
Video: Videos of the competition are available on the website: https://sites.google.com/site/micrortsaicompetition/getting-started?authuser=0

CoG-2023 multi-agent google research football competition

Title: CoG-2023 Multi-Agent Google Research Football Competition 
Organizers: 
Haifeng Zhang, Institute of Automation, Chinese Academy of Sciences, China
Yahui Cui, Institute of Automation, Chinese Academy of Sciences, China
Yan Song, Institute of Automation, Chinese Academy of Sciences, China 
Website: http://www.jidiai.cn/cog_2022/ 
Description: 
The competition is named the CoG-2023 Multi-Agent Google Research Football Competition, as it uses the multi-agent scenarios of the Google Research Football (GRF) simulator [1] as its testbed. The GRF simulator bears great similarity to well-known football games like FIFA and Real Football, vividly modelling the dynamics of the ball and player movement as well as their interactions, such as passing and shooting. The competition focuses on the full-game scenario built on the GRF simulator, where two teams of players compete for goals on the pitch. It will be held on the Jidi platform, which offers online evaluation of submitted policies in various simulated environments and hosts Kaggle-like customized competitions. Regarding the scale of the game, the GRF competition has two tracks run as separate contests: an easy track using the 5vs5 multi-agent full-game scenario and a hard track using the 11vs11 multi-agent full-game scenario. 
In these scenarios, a game lasts 3,000 discrete time steps, corresponding to 90 minutes of a real football match. The observation for each player contains information about both teammates and opponents, as well as the state of the ball. Based on this information, each individual player can either move or take specific actions such as passing or shooting. The goal of each team is to control all of its players (5 or 11), organize team offense and defense, and win the game by outscoring the other team. A strong participant is expected to demonstrate sophisticated football skills and counter various opponents' tactics in these complex competitive tasks. 
Previous GRF Competitions: 
GRF has been chosen as a testbed for AI competitions before. For instance, the Single-Agent RL (SARL) Google Research Football 2020 Kaggle Competition [2], held with the famous Manchester City F.C., attracted over 1,000 participating teams on the single-agent 11v11 full-game scenario. In that task, one side only controls a single designated player, while all of its teammates are controlled by the built-in AI. In 2022, Jidi held the MARL CoG 2022 GRF tracks [3] in multi-agent RL (MARL) settings (5v5 and 11v11 full-game scenarios) with an increased number of controllable players. This was more difficult than the single-agent setting and attracted over 100 participating teams. 
Jidi plans to continue the competition on GRF in 2023, after discovering interesting tactics in the MARL CoG 2022 GRF tracks. On the 5v5 track in particular, many top participants learned of a high-pass glitch in the goalkeeper's built-in logic and took advantage of this loophole to form extremely strong tactics. On the 11v11 track, collaborative behavior between players emerged; a game clip can be found in [4]. 
Evaluation Details: 
Each team consists of 1-3 members, and each track is scheduled for two warm-up rounds and two main rounds. Only the results of the main rounds count toward the final scores, and the team with the highest score wins the track. Both the warm-up and the main rounds adopt the Swiss-round double competition system [5], and the ranking of each team is determined through ⌈4·log N⌉ + 1 rounds of matches, where N is the number of teams. The double competition system means that two submissions are evaluated under the same environment random seeds for two rounds, switching sides between rounds. Each team starts with an initial rating of 1000, updated through the Elo algorithm. 
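A standard Elo update illustrates how such ratings evolve; the K-factor below is a common default, not a published Jidi parameter:

```python
def elo_update(r_a, r_b, score_a, k=32):
    """score_a is 1.0 for a win, 0.5 for a draw, 0.0 for a loss."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Both teams start at 1000, as in the competition description.
print(elo_update(1000, 1000, 1.0))  # (1016.0, 984.0)
```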
Tracks: To further encourage multi-agent learning research, we will hold two tracks on 5v5 and 11v11 full-game scenarios respectively: 
  1. 5vs5 track: http://www.jidiai.cn/env_detail?envid=71
  2. 11vs11 track: http://www.jidiai.cn/env_detail?envid=34;

VGC AI competition

Title: VGC AI Competition 
Organizers: Simão Reis 
Website: https://gitlab.com/DracoStriker/pokemon-vgc-engine/-/wikis/home 
Description: Can AI beat other opponents in Pokémon battle games?
The VGC AI Competition aims to emulate the esports scenario of human Pokémon video game championships with AI agents, including the game-balance aspect. Player agents must master Pokémon battling and team building, with only information about past team choices. Balancing agents must adapt Pokémon attributes to motivate a greater variety of choices by pre-set player agents.
Tentative submission deadline: 30th June 2023
Tracks: The competition is organized in three tracks: 
  1. In the Battle Track, battle agents must be able to pilot any given team. The winner is determined by the outcome of battles structured as a tree championship. 
  2. The Championship Track focuses on the team-building aspect of the VGCs. Player agents (battle+team-building duo) must be able to adapt to the meta by assembling the best teams to counter the teams they predict they will be facing. The winner is determined by the final ELO ranking after many epochs of competition. 
  3. In the Meta-Game Balance Track, a balancing agent has to maintain a Pokémon roster. Entries will be evaluated in parallel by points accumulated over temporal checkpoints, where each checkpoint adds the output of a fitness function that measures the diversity of the meta game induced by pre-set team-building agents. 

the 1st ChatGPT4PCG competition: character-like level generation for science birds competition

Title: The 1st ChatGPT4PCG Competition: Character-like Level Generation for Science Birds
Organizers:
1. Pittawat Taveekitworachai, Graduate School of Information Science and Engineering, Ritsumeikan University
2. Febri Abdullah, Graduate School of Information Science and Engineering, Ritsumeikan University
3. Mury F. Dewantoro, Graduate School of Information Science and Engineering, Ritsumeikan University
4. Ruck Thawonmas, College of Information Science and Engineering, Ritsumeikan University
5. Julian Togelius, NYU Tandon School of Engineering, New York University
6. Jochen Renz, School of Computing, The Australian National University
Website:
https://chatgpt4pcg.github.io/
Description of the Competition:
The 1st ChatGPT4PCG Competition challenges participants to utilize the state-of-the-art natural language processing tool, ChatGPT, to generate visually appealing and structurally sound levels for Science Birds, an Angry Birds clone created for research purposes.

As a participant, your goal is to create a prompt that instructs ChatGPT to generate a level in Science Birds that resembles an English capital letter while ensuring that the level is stable and able to withstand gravity. You are encouraged to use various prompt engineering techniques to develop the most effective prompt possible.

To participate, you must submit your prompt according to our guidelines. We will then generate a number of samples, each of which will undergo rigorous testing for stability and similarity. Stability will be evaluated by loading the level in Science Birds and ensuring that there is no movement for a duration of 10 seconds. The similarity of each generated level to its corresponding English character will be determined using an open-source Vision Transformer (ViT)-based classifier model. The stability testing system and the instructions to use the classifier model, as well as all the relevant tools, will be provided.
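As a hypothetical sketch of how per-prompt results might be aggregated (the official evaluation pipeline is provided by the organizers and may weigh stability and similarity differently):

```python
# Illustrative aggregation of trial results for one prompt; the official
# scoring tools are provided by the organizers and may differ.

def prompt_score(trials):
    """trials: list of (stable: bool, similarity: float in [0, 1]) per sample."""
    # A level contributes only if it stays still for 10 s in Science Birds;
    # its contribution is the ViT classifier's similarity to the target letter.
    return sum(sim for stable, sim in trials if stable) / len(trials)

print(prompt_score([(True, 0.9), (False, 0.8), (True, 0.6)]))  # 0.5
```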

This competition offers a unique opportunity for the best prompt engineers from around the world to showcase their creativity and skills. Join us in this exciting challenge to push the boundaries of prompt engineering and procedural content generation!
Track: N/A
Video: https://youtu.be/9AJhqIkDbxs
Contact Email address: chatgpt4pcg@gmail.com
Submission deadline:
Midterm: 19 May 2023 (23:59 JST)
Final: 21 July 2023 (23:59 JST)
Other Information:
Keywords: ChatGPT, prompt engineering, procedural content generation, level generation, conversational agent, large language model, Angry Birds, Science Birds
Programming languages: N/A. However, having general programming knowledge can be useful.
Complexity: Low-Medium
Competitive: New Competition
Barrier of entry: Low
