DRL implementations compatible with the environment are included in the repo as a submodule under `src/human_aware_rl`.
The old [human_aware_rl](https://github.com/HumanCompatibleAI/human_aware_rl) is being deprecated and should only be used to reproduce the results in the 2019 paper: *[On the Utility of Learning about Humans for Human-AI Coordination](https://arxiv.org/abs/1910.05789)* (also see our [blog post](https://bair.berkeley.edu/blog/2019/10/21/coordination/)).
For simple usage of the environment, it's worth considering [this environment wrapper](https://github.com/Stanford-ILIAD/PantheonRL).
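If you'd rather drive the environment directly, the core objects are the gridworld MDP and the environment wrapper around it. Below is a minimal sketch; the class, method, and layout names come from `overcooked_ai_py` as we understand them, so double-check them against the version you install:

```
from overcooked_ai_py.mdp.overcooked_mdp import OvercookedGridworld
from overcooked_ai_py.mdp.overcooked_env import OvercookedEnv
from overcooked_ai_py.agents.agent import AgentPair, RandomAgent

# Build the MDP for one of the standard layouts and wrap it in an environment
mdp = OvercookedGridworld.from_layout_name("cramped_room")
env = OvercookedEnv.from_mdp(mdp, horizon=400)

# Smoke test: roll out one episode with two random agents
trajectories = env.get_rollouts(
    AgentPair(RandomAgent(all_actions=True), RandomAgent(all_actions=True)),
    num_games=1,
)
```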
## Research Papers using Overcooked-AI 📑
### Installing from PyPI 🗜

You can install the pre-compiled wheel file using pip.
```
pip install overcooked-ai
```
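As a quick sanity check that the wheel installed correctly, make sure the package imports (the Python import name is `overcooked_ai_py`):

```
import overcooked_ai_py
print(overcooked_ai_py.__file__)
```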
Note that PyPI releases are stable but infrequent. For the most up-to-date development features, build from source. We recommend using [uv](https://docs.astral.sh/uv/getting-started/installation/) to install the package, so that you can use the provided lockfile and avoid package version issues.
### Building from source 🔧
Using uv (recommended):
```
uv venv
uv sync
```
When building from source, you can verify the installation by running the Overcooked unit test suite:

```
python testing/overcooked_test.py
```
To check whether the `human_aware_rl` module is installed correctly, you can run the following command from the `src/human_aware_rl` directory:

```
./run_tests.sh
```
⚠️ **Be sure to change your CWD to the human_aware_rl directory before running the script, as the test script uses the CWD to dynamically generate a path for saving temporary training runs/checkpoints. The testing script will fail if not run from the correct directory.**
This will run all tests belonging to the human_aware_rl module. _These tests no longer work out of the box, due to package version issues_: if you fix them, feel free to make a PR. You can check out the README in the submodule for instructions on running target-specific tests, which can be initiated from any directory.
If you're thinking of using the planning code extensively, you should run the full testing suite that verifies all of the Overcooked accessory tools (this can take 5-10 mins).
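If you intend to call the planners from your own code, the entry point looks roughly like the sketch below. Treat it as an assumption to verify against your installed version: `MediumLevelActionManager`, `NO_COUNTERS_PARAMS`, and the `from_pickle_or_compute` signature are taken from `overcooked_ai_py.planning.planners` as we recall them.

```
from overcooked_ai_py.mdp.overcooked_mdp import OvercookedGridworld
from overcooked_ai_py.planning.planners import MediumLevelActionManager, NO_COUNTERS_PARAMS

mdp = OvercookedGridworld.from_layout_name("cramped_room")
# The first call computes the motion and action planners and caches them to disk;
# subsequent calls load the pickled cache instead of recomputing
mlam = MediumLevelActionManager.from_pickle_or_compute(
    mdp, NO_COUNTERS_PARAMS, force_compute=False
)
```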
## Repo Structure Overview 🗺

`overcooked_demo` contains:

- `game.py`: The main logic of the game. State transitions are handled by the `overcooked.Gridworld` object embedded in the game environment
- `move_agents.py`: A script that simplifies copying checkpoints to the [agents](src/overcooked_demo/server/static/assets/agents/) directory. Instructions for how to use it can be found inside the file or by running `python move_agents.py -h`
- `up.sh`: Shell script to spin up the Docker server that hosts the game

`human_aware_rl` contains (NOTE: this is no longer supported; see the bottom of the README for more info):

`ppo/`:

- `ppo_rllib.py`: Primary module where the code for training a PPO agent resides. This includes an rllib-compatible wrapper on `OvercookedEnv`, utilities for converting rllib `Policy` classes to Overcooked `Agent`s, as well as utility functions and callbacks
## Raw Data :ledger:
The raw data used during BC training is >100 MB, which makes it inconvenient to distribute via git. The code uses pickled dataframes for training and testing, but in case one needs the original data, it can be found [here](https://drive.google.com/drive/folders/1aGV8eqWeOG5BMFdUcVoP2NHU_GFPqi57?usp=share_link).
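If you need to inspect the pickled dataframes directly, plain pandas is enough. The path below is hypothetical; point it at wherever you placed the downloaded data:

```
import pandas as pd

# Hypothetical path -- substitute the pickled dataframe you downloaded
df = pd.read_pickle("human_data/cleaned/2019_hh_trials.pickle")
print(df.shape)
print(df.columns.tolist())
```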
## Deprecated: Behavior Cloning and Reinforcement Learning
We used to include code for training BC and PPO agents in the `human_aware_rl` module. This is now deprecated because of package version issues that are hard to fix. See this [issue](https://github.com/HumanCompatibleAI/overcooked_ai/issues/162) for more details.
## Further Issues and questions ❓
If you have issues or questions, you can contact [Micah Carroll](https://micahcarroll.github.io) at mdc@berkeley.edu.