shyamsn97/mario-gpt
Forks: 101 · Stars: 1113 (updated 2024-11-01 07:41:08)
license: MIT
Language: Python
[NeurIPS 2023] Generating Mario Levels with GPT2. Code for the paper "MarioGPT: Open-Ended Text2Level Generation through Large Language Models" https://arxiv.org/abs/2302.05981
Latest release: 0.1.3 (2023-02-22 03:32:36)
MarioGPT: Open-Ended Text2Level Generation through Large Language Models
(GIFs: Playing Generated Level | Generated Level)
How does it work?
(Figures: Architecture | Example Prompt Generations)
MarioGPT is a finetuned GPT2 model (specifically, distilgpt2) trained on a subset of Super Mario Bros and Super Mario Bros: The Lost Levels levels, provided by The Video Game Level Corpus. MarioGPT is able to generate levels, guided by a simple text prompt. This generation is not perfect, but we believe this is a great first step toward more controllable and diverse level / environment generation.
(GIF: forward generation)
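Under the hood, a level is just a grid of tile characters in the text format used by The Video Game Level Corpus, and MarioGPT generates that grid autoregressively, one column of tiles at a time, with the prompt steering which tiles get placed. The snippet below is only an illustration of that representation; the tile characters follow the usual VGLC convention and the 14-tile column height is an assumption, not something read out of the shipped dataset.

# Illustrative tile-grid sketch (assumed VGLC-style characters):
#   "X" = ground, "S" = breakable block, "?" = question block,
#   "E" = enemy, "-" = empty, "<", ">", "[", "]" = pipe pieces
example_columns = [
    "--------------",  # empty sky column (14 tiles tall, assumed height)
    "-------------X",  # sky column ending in a ground tile
    "------E------X",  # a column containing an enemy above the ground
    "----S?S------X",  # breakable blocks and a question block mid-air
]

# MarioGPT treats the whole level as one long string of such tiles and
# predicts it tile by tile, conditioned on the text prompt.
flat_level = "".join(example_columns)
print(len(flat_level), "tiles across", len(example_columns), "columns")

If the columns really are 14 tiles tall, the num_steps=1400 used in the snippets below corresponds to a level roughly 100 columns wide.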
Requirements
- python3.8+
Installation
from PyPI
pip install mario-gpt
or from source
git clone git@github.com:shyamsn97/mario-gpt.git
python setup.py install
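A quick way to confirm the install worked is to import the classes used throughout this README (purely a sanity check, nothing more):

# sanity check: both classes are used in the examples below
from mario_gpt import MarioLM, SampleOutput
print(MarioLM, SampleOutput)  # should print the classes without an ImportError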
Generating Levels
Since our models are built on the amazing transformers library, we host our model at https://huggingface.co/shyamsn97/Mario-GPT2-700-context-length.
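If you want to poke at the raw checkpoint, it can also be loaded directly with transformers (a minimal sketch; MarioLM() below wraps this loading for you, and it is assumed here that the Hub repo ships its tokenizer files alongside the weights):

from transformers import AutoModelForCausalLM, AutoTokenizer

# the checkpoint hosted on the Hugging Face Hub (link above)
checkpoint = "shyamsn97/Mario-GPT2-700-context-length"

model = AutoModelForCausalLM.from_pretrained(checkpoint)   # gpt2-style LM
tokenizer = AutoTokenizer.from_pretrained(checkpoint)      # assumed to be included in the repo
print(model.config.model_type)  # distilgpt2 shares the gpt2 architecture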
This snippet is the minimal code you need to generate a Mario level!
from mario_gpt import MarioLM, SampleOutput
# pretrained_model = shyamsn97/Mario-GPT2-700-context-length
mario_lm = MarioLM()
# use cuda to speed stuff up
# import torch
# device = torch.device('cuda')
# mario_lm = mario_lm.to(device)
prompts = ["many pipes, many enemies, some blocks, high elevation"]
# generate level of size 1400, pump temperature up to ~2.4 for more stochastic but playable levels
generated_level = mario_lm.sample(
prompts=prompts,
num_steps=1400,
temperature=2.0,
use_tqdm=True
)
# show string list
generated_level.level
# show PIL image
generated_level.img
# save image
generated_level.img.save("generated_level.png")
# save text level to file
generated_level.save("generated_level.txt")
# play in the interactive simulator
generated_level.play()
# run the A* agent
generated_level.run_astar()
# Continue generation
generated_level_continued = mario_lm.sample(
seed=generated_level,
prompts=prompts,
num_steps=1400,
temperature=2.0,
use_tqdm=True
)
# load from text file
loaded_level = SampleOutput.load("generated_level.txt")
# play from loaded (should be the same level that we generated)
loaded_level.play()
...
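Everything above can be combined into a small batch-generation loop. The sketch below only reuses calls already shown in this README (sample, save, img.save); the prompt wording and file names are just examples:

from mario_gpt import MarioLM

mario_lm = MarioLM()

# illustrative prompts -- any free-form text works
prompts = [
    "many pipes, many enemies, some blocks, high elevation",
    "no pipes, no enemies, many blocks, low elevation",
    "some pipes, some enemies, little blocks",
]

for i, prompt in enumerate(prompts):
    level = mario_lm.sample(
        prompts=[prompt],
        num_steps=1400,
        temperature=2.0,
        use_tqdm=True,
    )
    # save both the text level and a rendered image, as shown above
    level.save(f"level_{i}.txt")
    level.img.save(f"level_{i}.png")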
Training
The code to train MarioGPT is simple and straightforward; the training class is located here, along with a small example notebook.
import torch
from mario_gpt import MarioDataset, MarioLM, TrainingConfig, MarioGPTTrainer
# create basic gpt model
BASE = "distilgpt2"
mario_lm = MarioLM(lm_path=BASE, tokenizer_path=BASE)
# create dataset
dataset = MarioDataset(mario_lm.tokenizer)
# create training config and trainer
config = TrainingConfig(save_iteration=10)
trainer = MarioGPTTrainer(mario_lm, dataset, config=config)
# train for 100 iterations!
trainer.train(100, batch_size=1)
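Once the short run above finishes, you can sample from the model with the same sample API used earlier. This is a minimal sketch and assumes MarioGPTTrainer updates the wrapped language model in place; with only 100 iterations on the base distilgpt2, expect rough output.

# assumption: the trainer optimizes mario_lm's underlying LM in place,
# so it can be sampled from directly after training
prompts = ["some pipes, some enemies, some blocks, low elevation"]
generated_level = mario_lm.sample(
    prompts=prompts,
    num_steps=140,   # a short slice of level, just to inspect the output
    temperature=2.0,
    use_tqdm=True,
)
print(generated_level.level)  # tile-string representation, as shown earlier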
See the notebook for a more in-depth tutorial on generating levels.
Interacting with Levels
Right now there are two ways to interact with generated levels:
- Hugging Face demo -- thanks to the amazing work by multimodalart, you can generate and play levels interactively in the browser! In addition, GPUs are provided so you don't have to own one yourself.
- Using the play() and run_astar() methods. These require you to have Java installed on your computer (Java 8+ tested). For interactive play, use the play() method; for the A* agent, use the run_astar() method. Example:
from mario_gpt import MarioLM
mario_lm = MarioLM()
prompts = ["many pipes, many enemies, some blocks, high elevation"]
generated_level = mario_lm.sample(
prompts=prompts,
num_steps=1400,
temperature=2.0,
use_tqdm=True
)
# play in the interactive simulator
generated_level.play()
# run the A* agent
generated_level.run_astar()
Future Plans
Here's a list of some stuff that will be added to the codebase!
- Basic inference code
- Add MarioBert Model
- Add Interactive simulator
- Training code from paper
- Inpainting functionality from paper
- Open-ended level generation code
- Different generation methods (e.g. constrained beam search, etc.)
Authors
Shyam Sudhakaran shyamsnair@protonmail.com, https://github.com/shyamsn97, https://shyamsn97.github.io/
Miguel González-Duque migd@itu.dk, https://github.com/miguelgondu
Claire Glanois clgl@itu.dk, https://github.com/claireaoi
Matthias Freiberger matfr@itu.dk, https://github.com/matfrei
Elias Najarro enaj@itu.dk, https://github.com/enajx
Sebastian Risi sebr@itu.dk, https://github.com/sebastianrisi, https://sebastianrisi.com/
Citation
If you use the code for academic or commercial purposes, please cite the associated paper:
@misc{https://doi.org/10.48550/arxiv.2302.05981,
doi = {10.48550/ARXIV.2302.05981},
url = {https://arxiv.org/abs/2302.05981},
author = {Sudhakaran, Shyam and González-Duque, Miguel and Glanois, Claire and Freiberger, Matthias and Najarro, Elias and Risi, Sebastian},
keywords = {Artificial Intelligence (cs.AI), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {MarioGPT: Open-Ended Text2Level Generation through Large Language Models},
publisher = {arXiv},
year = {2023},
copyright = {arXiv.org perpetual, non-exclusive license}
}
Recent releases (data updated 2024-10-06 11:35:50):
2023-02-22 03:32:36 0.1.3
2023-02-18 16:21:09 0.1.2
2023-02-16 11:14:48 0.1.1
2023-02-14 16:05:24 0.1.0