
Merge branch 'dev'
Eduard-Prokhorikhin committed May 4, 2024
2 parents cb58e4e + 509111a commit e8a549d
Showing 10 changed files with 137 additions and 69 deletions.
73 changes: 64 additions & 9 deletions README.md
@@ -1,21 +1,74 @@
# Tetris AI

<div align="center">

![GitHub Workflow Status (with event)](https://img.shields.io/github/actions/workflow/status/CogitoNTNU/TetrisAI/ci.yml)
![GitHub top language](https://img.shields.io/github/languages/top/CogitoNTNU/TetrisAI)
![GitHub language count](https://img.shields.io/github/languages/count/CogitoNTNU/TetrisAI)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Project Version](https://img.shields.io/badge/version-1.0.0-blue)](https://img.shields.io/badge/version-1.0.0-blue)

![logo](docs/img/Logo.webp)

</div>

## Description

<!-- TODO -->
This project is our attempt at making an AI that can play Tetris. We first built the environment (the game itself) and then developed several agents that can play it, each based on a different algorithm or strategy. We have implemented:

- Random agent
- Heuristic agent with set weights
- Genetic algorithm to find the best weights for the heuristic agent

The game can be played (or watched) both in the terminal and in a GUI. The GUI is made with Pygame.
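
As a rough sketch (not the project's actual code; the weights and helper below are illustrative), a heuristic agent of this kind scores every candidate board as a weighted sum of board features and picks the highest-scoring move, while the genetic algorithm searches for a good weight vector:

```python
# Illustrative only: feature names mirror the ones used in this repo
# (aggregate height, holes, bumpiness, lines cleared), but the values
# and the helper function are made up for this sketch.
def evaluate(board_features: dict[str, float], weights: dict[str, float]) -> float:
    """Score a candidate board as a weighted sum of its features."""
    return sum(weights[name] * value for name, value in board_features.items())

weights = {"aggregate_height": -0.5, "holes": -0.7, "bumpiness": -0.2, "lines_cleared": 0.8}
features = {"aggregate_height": 34, "holes": 2, "bumpiness": 6, "lines_cleared": 1}

print(evaluate(features, weights))  # the agent picks the move with the highest score
```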

## How to run and install

- Have Python installed (tested with 3.10+, but it should work on older versions as well)
- Have pip installed
- Clone the repository
- Set up a virtual environment (optional, but recommended); see [here](docs/guide/venv.md) for a guide and the example after this list
- Install the required packages with pip:

```bash
pip install -r requirements.txt
```

- Done! You are now ready to run the game
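
For reference, a minimal setup might look like this (assuming a Unix-like shell; on Windows, activate with `venv\Scripts\activate` instead, and see the linked guide for details):

```bash
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```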

## Usage

To play the game yourself, run the following command:

```bash
python main.py play
```

To let the agent play the game, run the following command:

```bash
python main.py agent <agent>
```

where `<agent>` is the agent you want to use. The available agents are `random`, `heuristic`, and `genetic`.
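
For example, to watch the heuristic agent play:

```bash
python main.py agent heuristic
```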

## Setup
### Prerequisites
- Ensure that git is installed on your machine. [Download Git](https://git-scm.com/downloads)
- Ensure Python 3.12 or newer is installed on your machine. [Download Python](https://www.python.org/downloads/)
To train the genetic agent, run the following command:

### Clone the repository
```bash
git clone https://github.com/CogitoNTNU/TetrisAI.git
cd TetrisAI
python main.py train
```
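
Training also accepts an optional `--population_size` flag (default 10, as defined in `main.py` below), for example:

```bash
python main.py train --population_size 20
```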

## Contributors
## Testing

To run the test suite, run the following command from the root directory of the project:

```bash
python -m pytest
```

## Team

The team behind this project is a group of students at NTNU in Trondheim, Norway. The project was developed during the spring semester of 2024. The team consists of:

<table align="center">
<tr>
@@ -63,3 +116,5 @@ cd TetrisAI
</td>
</tr>
</table>

![Group picture](docs/img/Team.png)
9 changes: 9 additions & 0 deletions demo.py
@@ -0,0 +1,9 @@
from src.agents.agent_factory import create_agent
from src.game.tetris import Tetris
from src.game.TetrisGameManager import *

if __name__ == "__main__":
    board = Tetris()
    agent = create_agent("heuristic")
    manager = TetrisGameManager(board)
    manager.startDemo(agent)
Binary file removed docs - Shortcut.lnk
Binary file added docs/img/Logo.webp
Binary file added docs/img/Team.png
92 changes: 45 additions & 47 deletions main.py
@@ -1,54 +1,52 @@
import argparse
from src.game.tetris import Tetris
from src.game.TetrisGameManager import *
from src.agents.agent import Agent, play_game
from src.game.TetrisGameManager import TetrisGameManager
from src.agents.agent_factory import create_agent
from src.agents.heuristic import (
    utility
)
from src.agents.heuristic_trainer import train
from src.agents.geneticAlgAgentJon import GeneticAlgAgentJM

def test():
    # algAgent = GeneticAlgAgentJM()
    # algAgent.number_of_selection(2)
    # print(algAgent.getBestPop())

def self_play():
    """Start a self-playing Tetris game."""
    board = Tetris()
    agent = create_agent("heuristic")
    manager = TetrisGameManager(board)
    manager.startGame()
def demonstrate(agent_type: str):
    """Demonstrate gameplay with a specified agent."""
    board = Tetris()
    agent = create_agent(agent_type)
    manager = TetrisGameManager(board)
    manager.startDemo(agent)

def train_genetic_algorithm(population_size: int = 10):
    """Train the genetic algorithm agent."""
    alg_agent = GeneticAlgAgentJM()
    alg_agent.number_of_selection(population_size)
    print(alg_agent.getBestPop())
def main():
    """Main entry point to handle command-line arguments."""
    parser = argparse.ArgumentParser(description="Tetris Game with different options.")
    subparsers = parser.add_subparsers(dest="command", help="Sub-command help")
    # Self-play parser
    subparsers.add_parser("play", help="Start a self-playing Tetris game.")
    # Demonstrate parser
    demonstrate_parser = subparsers.add_parser(
        "agent", help="Demonstrate gameplay with a specific agent."
    )
    demonstrate_parser.add_argument(
        "agent", type=str, help="Agent type for demonstration."
    )
    # Genetic algorithm training parser
    train_parser = subparsers.add_parser("train", help="Train the genetic algorithm agent.")
    train_parser.add_argument(
        "--population_size", type=int, default=10, help="Population size for the genetic algorithm."
    )
    # Parse the arguments
    args = parser.parse_args()
    # Route commands to the appropriate functions
    if args.command == "play":
        self_play()
    elif args.command == "agent":
        demonstrate(args.agent)
    elif args.command == "train":
        train_genetic_algorithm(args.population_size)
    else:
        parser.print_help()
if __name__ == "__main__":

    # game = Tetris()
    # agent: Agent = create_agent("heuristic")
    # sum_rows_removed = 0
    # for i in range(10):
    # end_board = play_game(agent, game, 7)
    # end_board.printBoard()
    # sum_rows_removed += end_board.rowsRemoved

    # print(f"Average rows removed: {sum_rows_removed / 10}")

    # possible_moves = game.getPossibleBoards()
    # for boards in possible_moves:
    # print(utility(boards, 0, -1, 0, 0, 0))
    # boards.printBoard()

    # board = Tetris()
    # manager = TetrisGameManager(board)
    # agent = create_agent("heuristic")

    # # manager.startGame()

    # # train()


    # algAgent = GeneticAlgAgentJM()
    # algAgent.number_of_selection(2)
    # print(algAgent.getBestPop())

    test()


    # cProfile.run('main()', 'restats')
    main()
8 changes: 3 additions & 5 deletions requirements.txt
@@ -1,8 +1,6 @@
colorama==0.4.6
iniconfig==2.0.0
numpy==1.26.4
packaging==24.0
pluggy==1.4.0
pluggy==1.5.0
pygame==2.5.2
pynput==1.7.6
pytest==8.1.1
six==1.16.0
pytest==8.2.0
18 changes: 13 additions & 5 deletions src/agents/geneticAlgAgentJon.py
@@ -79,7 +79,9 @@ def play_game(self, agg_height, max_height, lines_cleared, bumpiness, holes):

            move += 1
            # Advance the game by one frame
            board.doAction(Action.SOFT_DROP)
            board.doAction(Action.SOFT_DROP)
            if board.blockHasLanded:
                board.updateBoard()
            #board.printBoard()

        total_cleared += board.rowsRemoved
@@ -103,7 +105,8 @@ def replace_30_percent(self, pop_list: list[list[list[float], float]]) -> list[l
    # TODO create method for fetching a random 10%, and finds the two with highest lines cleared, and makes a child (with 5% chance of mutation)
    def paring_pop(self, pop_list: list[list[list[float], float]]) -> list[list[float], float]:
        # Gets the number of pops to select
        num_pops_to_select = int(len(pop_list) * 0.1)
        # num_pops_to_select = int(len(pop_list) * 0.1)
        num_pops_to_select = int(len(pop_list) * 0.5)

        # Get a sample of pops based on the previous number
        random_pop_sample = random.sample(pop_list, num_pops_to_select)
@@ -113,13 +116,18 @@ def paring_pop(self, pop_list: list[list[list[float], float]]) -> list[list[float], float]:

        # Gets the child pop of the two pops
        new_pop = self.fitness_crossover(highest_values[0], highest_values[1])

        norm = np.linalg.norm(new_pop[0])
        if norm == 0:
            norm = 1e-10 # or some small constant
        new_pop[0] = [i / norm for i in new_pop[0]]
        new_pop[0] = [i / norm for i in new_pop[0]]

        # Mutate 5% of children pops
        if random.randrange(0,1000)/1000 < 0.2:
        if random.randrange(0,1000)/1000 < 0.05:
            random_parameter = int(random.randint(0,4))
            new_pop[0][random_parameter] = (random.randrange(-200, 200)/1000) * new_pop[0][random_parameter]

        new_pop[0] = (new_pop[0] / np.linalg.norm(new_pop[0])).tolist()
        #new_pop[0] = (new_pop[0] / np.linalg.norm(new_pop[0])).tolist()

        return new_pop

4 changes: 2 additions & 2 deletions src/agents/random_agent.py
@@ -7,8 +7,8 @@
class RandomAgent(Agent):
    """Random agent that selects a random move from the list of possible moves"""

    def result(self, board: Tetris) -> Action:
    def result(self, board: Tetris) -> list[Action]:
        possible_actions = get_all_actions()
        return choice(possible_actions)
        return [choice(possible_actions)]


2 changes: 1 addition & 1 deletion src/game/tetris.py
@@ -7,7 +7,7 @@

from src.game.block import Block

DEMO_SLEEP = 0.05
DEMO_SLEEP = 0


class Action(Enum):
