
Piece by Piece: Assembling a Modular Reinforcement Learning Environment for Tetris

EasyChair Preprint 13437, version 1

6 pages
Date: May 26, 2024

Abstract

The game of Tetris is an open challenge in machine learning and especially Reinforcement Learning (RL). Despite its popularity, contemporary environments for the game lack key qualities, such as clear documentation, an up-to-date codebase, or game-related features. This work introduces Tetris Gymnasium, a modern RL environment built with Gymnasium, which aims to address these problems by being modular, understandable, and adjustable. To evaluate Tetris Gymnasium on these qualities, a Deep Q-Learning agent was trained on it and compared against a baseline environment, and it was found that it fulfills all requirements of a feature-complete RL environment while being adjustable to many different requirements. The source code and documentation are available on GitHub and can be used for free under the MIT license (https://github.com/Max-We/Tetris-Gymnasium).
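As a rough illustration of the intended workflow, the sketch below drives such an environment through the standard Gymnasium API (a reset/step loop with random actions standing in for a trained agent). The environment ID "tetris_gymnasium/Tetris" and the import path are assumptions based on the repository name; consult the linked documentation for the exact identifiers.

import gymnasium as gym
from tetris_gymnasium.envs import Tetris  # assumed import path; registers the environment with Gymnasium

# Assumed environment ID; see the repository documentation for the exact name.
env = gym.make("tetris_gymnasium/Tetris")

observation, info = env.reset(seed=42)
terminated = truncated = False
while not (terminated or truncated):
    # Random actions stand in for a trained Deep Q-Learning agent.
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)

env.close()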

Keyphrases: Gymnasium, Reinforcement Learning, Software Engineering, Tetris, library

BibTeX entry
BibTeX does not have the right entry type for preprints. The following entry is a workaround that produces the correct reference:
@booklet{EasyChair:13437,
  author    = {Maximilian Weichart and Philipp Hartl},
  title     = {Piece by Piece: Assembling a Modular Reinforcement Learning Environment for Tetris},
  howpublished = {EasyChair Preprint 13437},
  year      = {EasyChair, 2024}}