Commit 8a9b2fc: fix merge conflicts with develop
2 parents: ab0f9d7 + af8fbad

55 files changed, +1947 -376 lines

.github/workflows/ci.yml (+36 -2)

@@ -14,7 +14,7 @@ on:
     - cron: '0 5 * * 4'
 
 concurrency:
-  group: ${{ github.workflow }}-${{ github.ref }}
+  group: "${{ github.workflow }}-${{ github.ref }}-${{ github.event_name }}"
   cancel-in-progress: true
 permissions:
   repository-projects: read

@@ -77,6 +77,17 @@ jobs:
         # Allow failure for coveralls
         coveralls || true
 
+    - name: Check for repository changes
+      run: |
+        if [ -n "$(git status --porcelain)" ]; then
+          echo "Repository is dirty, changes detected:"
+          git status
+          git diff
+          exit 1
+        else
+          echo "Repository is clean, no changes detected."
+        fi
+
     - name: Backtesting (multi)
       run: |
         cp config_examples/config_bittrex.example.json config.json

@@ -174,6 +185,17 @@ jobs:
       run: |
         pytest --random-order
 
+    - name: Check for repository changes
+      run: |
+        if [ -n "$(git status --porcelain)" ]; then
+          echo "Repository is dirty, changes detected:"
+          git status
+          git diff
+          exit 1
+        else
+          echo "Repository is clean, no changes detected."
+        fi
+
     - name: Backtesting
       run: |
         cp config_examples/config_bittrex.example.json config.json

@@ -237,6 +259,18 @@ jobs:
       run: |
         pytest --random-order
 
+    - name: Check for repository changes
+      run: |
+        if (git status --porcelain) {
+          Write-Host "Repository is dirty, changes detected:"
+          git status
+          git diff
+          exit 1
+        }
+        else {
+          Write-Host "Repository is clean, no changes detected."
+        }
+
     - name: Backtesting
       run: |
         cp config_examples/config_bittrex.example.json config.json

@@ -302,7 +336,7 @@ jobs:
     - name: Set up Python
       uses: actions/setup-python@v4
       with:
-        python-version: "3.10"
+        python-version: "3.11"
 
     - name: Documentation build
       run: |
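The new CI step fails the build whenever the working tree is dirty after the test run, which catches tests that accidentally modify committed files. Its decision logic can be sketched in Python; `repo_is_dirty` is a hypothetical helper shown against captured `git status --porcelain` output rather than invoking git itself:

```python
def repo_is_dirty(porcelain_output: str) -> bool:
    """Mirror the CI step's check: any non-empty line in
    `git status --porcelain` output means uncommitted changes exist."""
    return any(line.strip() for line in porcelain_output.splitlines())


print(repo_is_dirty(""))  # False: clean tree
print(repo_is_dirty(" M freqtrade/enums/exittype.py\n?? newfile.py"))  # True
```

In the workflow itself the same test is the shell's `[ -n "$(git status --porcelain)" ]`, since porcelain output is empty exactly when the tree is clean.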

docs/freqai-parameter-table.md (+1 -1)

@@ -18,7 +18,7 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
 | `purge_old_models` | Number of models to keep on disk (not relevant to backtesting). Default is 2, which means that dry/live runs will keep the latest 2 models on disk. Setting to 0 keeps all models. This parameter also accepts a boolean to maintain backwards compatibility. <br> **Datatype:** Integer. <br> Default: `2`.
 | `save_backtest_models` | Save models to disk when running backtesting. Backtesting operates most efficiently by saving the prediction data and reusing them directly for subsequent runs (when you wish to tune entry/exit parameters). Saving backtesting models to disk also allows to use the same model files for starting a dry/live instance with the same model `identifier`. <br> **Datatype:** Boolean. <br> Default: `False` (no models are saved).
 | `fit_live_predictions_candles` | Number of historical candles to use for computing target (label) statistics from prediction data, instead of from the training dataset (more information can be found [here](freqai-configuration.md#creating-a-dynamic-target-threshold)). <br> **Datatype:** Positive integer.
-| `continual_learning` | Use the final state of the most recently trained model as starting point for the new model, allowing for incremental learning (more information can be found [here](freqai-running.md#continual-learning)). <br> **Datatype:** Boolean. <br> Default: `False`.
+| `continual_learning` | Use the final state of the most recently trained model as starting point for the new model, allowing for incremental learning (more information can be found [here](freqai-running.md#continual-learning)). Beware that this is currently a naive approach to incremental learning, and it has a high probability of overfitting/getting stuck in local minima while the market moves away from your model. We have the connections here primarily for experimental purposes and so that it is ready for more mature approaches to continual learning in chaotic systems like the crypto market. <br> **Datatype:** Boolean. <br> Default: `False`.
 | `write_metrics_to_disk` | Collect train timings, inference timings and cpu usage in json file. <br> **Datatype:** Boolean. <br> Default: `False`
 | `data_kitchen_thread_count` | <br> Designate the number of threads you want to use for data processing (outlier methods, normalization, etc.). This has no impact on the number of threads used for training. If user does not set it (default), FreqAI will use max number of threads - 2 (leaving 1 physical core available for Freqtrade bot and FreqUI) <br> **Datatype:** Positive integer.

docs/freqai-reinforcement-learning.md (+13 -1)

@@ -135,7 +135,14 @@ Parameter details can be found [here](freqai-parameter-table.md), but in general
 
 ## Creating a custom reward function
 
-As you begin to modify the strategy and the prediction model, you will quickly realize some important differences between the Reinforcement Learner and the Regressors/Classifiers. Firstly, the strategy does not set a target value (no labels!). Instead, you set the `calculate_reward()` function inside the `MyRLEnv` class (see below). A default `calculate_reward()` is provided inside `prediction_models/ReinforcementLearner.py` to demonstrate the necessary building blocks for creating rewards, but users are encouraged to create their own custom reinforcement learning model class (see below) and save it to `user_data/freqaimodels`. It is inside the `calculate_reward()` where creative theories about the market can be expressed. For example, you can reward your agent when it makes a winning trade, and penalize the agent when it makes a losing trade. Or perhaps, you wish to reward the agent for entering trades, and penalize the agent for sitting in trades too long. Below we show examples of how these rewards are all calculated:
+!!! danger "Not for production"
+    Warning!
+    The reward function provided with the Freqtrade source code is a showcase of functionality designed to show/test as many possible environment control features as possible. It is also designed to run quickly on small computers. This is a benchmark, it is *not* for live production. Please beware that you will need to create your own custom_reward() function or use a template built by other users outside of the Freqtrade source code.
+
+As you begin to modify the strategy and the prediction model, you will quickly realize some important differences between the Reinforcement Learner and the Regressors/Classifiers. Firstly, the strategy does not set a target value (no labels!). Instead, you set the `calculate_reward()` function inside the `MyRLEnv` class (see below). A default `calculate_reward()` is provided inside `prediction_models/ReinforcementLearner.py` to demonstrate the necessary building blocks for creating rewards, but this is *not* designed for production. Users *must* create their own custom reinforcement learning model class or use a pre-built one from outside the Freqtrade source code and save it to `user_data/freqaimodels`. It is inside the `calculate_reward()` where creative theories about the market can be expressed. For example, you can reward your agent when it makes a winning trade, and penalize the agent when it makes a losing trade. Or perhaps, you wish to reward the agent for entering trades, and penalize the agent for sitting in trades too long. Below we show examples of how these rewards are all calculated:
+
+!!! note "Hint"
+    The best reward functions are ones that are continuously differentiable, and well scaled. In other words, adding a single large negative penalty to a rare event is not a good idea, and the neural net will not be able to learn that function. Instead, it is better to add a small negative penalty to a common event. This will help the agent learn faster. Not only this, but you can help improve the continuity of your rewards/penalties by having them scale with severity according to some linear/exponential functions. In other words, you'd slowly scale the penalty as the duration of the trade increases. This is better than a single large penalty occurring at a single point in time.
 
 ```python
 from freqtrade.freqai.prediction_models.ReinforcementLearner import ReinforcementLearner

@@ -169,6 +176,11 @@ As you begin to modify the strategy and the prediction model, you will quickly r
     User made custom environment. This class inherits from BaseEnvironment and gym.env.
     Users can override any functions from those parent classes. Here is an example
     of a user customized `calculate_reward()` function.
+
+    Warning!
+    This function is a showcase of functionality designed to show as many possible
+    environment control features as possible. It is also designed to run quickly
+    on small computers. This is a benchmark, it is *not* for live production.
     """
     def calculate_reward(self, action: int) -> float:
         # first, penalize if the action is not valid
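The hint added above (continuous, well-scaled rewards beat rare cliff penalties) can be illustrated with a pair of toy penalty functions. Names and constants here are illustrative only, not part of the freqtrade API:

```python
def scaled_duration_penalty(trade_duration: int, max_duration: int = 300) -> float:
    """Penalty grows linearly with time spent in the trade, capped at -0.1:
    a small, frequent signal the network can learn from."""
    return -0.1 * min(trade_duration / max_duration, 1.0)


def cliff_duration_penalty(trade_duration: int, max_duration: int = 300) -> float:
    """The discontinuous alternative the hint warns against: nothing at all
    until the cutoff, then a single large negative reward."""
    return -10.0 if trade_duration > max_duration else 0.0


for duration in (0, 150, 300, 301):
    print(duration, scaled_duration_penalty(duration), cliff_duration_penalty(duration))
```

The scaled version gives the agent gradient-friendly feedback on every step, while the cliff version is zero almost everywhere and then jumps, which is exactly the kind of rare large event the note says the network will struggle to learn.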

docs/freqai-running.md (+3)

@@ -131,6 +131,9 @@ You can choose to adopt a continual learning scheme by setting `"continual_learn
 ???+ danger "Continual learning enforces a constant parameter space"
     Since `continual_learning` means that the model parameter space *cannot* change between trainings, `principal_component_analysis` is automatically disabled when `continual_learning` is enabled. Hint: PCA changes the parameter space and the number of features, learn more about PCA [here](freqai-feature-engineering.md#data-dimensionality-reduction-with-principal-component-analysis).
 
+???+ danger "Experimental functionality"
+    Beware that this is currently a naive approach to incremental learning, and it has a high probability of overfitting/getting stuck in local minima while the market moves away from your model. We have the mechanics available in FreqAI primarily for experimental purposes and so that it is ready for more mature approaches to continual learning in chaotic systems like the crypto market.
+
 ## Hyperopt
 
 You can hyperopt using the same command as for [typical Freqtrade hyperopt](hyperopt.md):

docs/freqai.md (+5 -6)

@@ -32,7 +32,10 @@ The easiest way to quickly test FreqAI is to run it in dry mode with the followi
 freqtrade trade --config config_examples/config_freqai.example.json --strategy FreqaiExampleStrategy --freqaimodel LightGBMRegressor --strategy-path freqtrade/templates
 ```
 
-You will see the boot-up process of automatic data downloading, followed by simultaneous training and trading.
+You will see the boot-up process of automatic data downloading, followed by simultaneous training and trading.
+
+!!! danger "Not for production"
+    The example strategy provided with the Freqtrade source code is designed for showcasing/testing a wide variety of FreqAI features. It is also designed to run on small computers so that it can be used as a benchmark between developers and users. It is *not* designed to be run in production.
 
 An example strategy, prediction model, and config to use as a starting points can be found in
 `freqtrade/templates/FreqaiExampleStrategy.py`, `freqtrade/freqai/prediction_models/LightGBMRegressor.py`, and

@@ -69,11 +72,7 @@ pip install -r requirements-freqai.txt
 ```
 
 !!! Note
-    Catboost will not be installed on arm devices (raspberry, Mac M1, ARM based VPS, ...), since it does not provide wheels for this platform.
-
-!!! Note "python 3.11"
-    Some dependencies (Catboost, Torch) currently don't support python 3.11. Freqtrade therefore only supports python 3.10 for these models/dependencies.
-    Tests involving these dependencies are skipped on 3.11.
+    Catboost will not be installed on low-powered arm devices (raspberry), since it does not provide wheels for this platform.
 
 ### Usage with docker

docs/rest-api.md (+3 -1)

@@ -134,7 +134,9 @@ python3 scripts/rest_client.py --config rest_config.json <command> [optional par
 | `reload_config` | Reloads the configuration file.
 | `trades` | List last trades. Limited to 500 trades per call.
 | `trade/<tradeid>` | Get specific trade.
-| `delete_trade <trade_id>` | Remove trade from the database. Tries to close open orders. Requires manual handling of this trade on the exchange.
+| `trade/<tradeid>` | DELETE - Remove trade from the database. Tries to close open orders. Requires manual handling of this trade on the exchange.
+| `trade/<tradeid>/open-order` | DELETE - Cancel open order for this trade.
+| `trade/<tradeid>/reload` | GET - Reload a trade from the Exchange. Only works in live, and can potentially help recover a trade that was manually sold on the exchange.
 | `show_config` | Shows part of the current configuration with relevant settings to operation.
 | `logs` | Shows last log messages.
 | `status` | Lists all open trades.
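The reworked rows distinguish trade operations by HTTP method on the same route. That mapping can be sketched as a small helper (the function name is hypothetical and the paths are the documented routes shown above, without any base-URL prefix):

```python
from typing import Tuple


def trade_route(trade_id: int, action: str) -> Tuple[str, str]:
    """Return the (HTTP method, route) pair for a trade operation,
    following the documented routes in the table above."""
    routes = {
        "get": ("GET", f"trade/{trade_id}"),
        "delete": ("DELETE", f"trade/{trade_id}"),
        "cancel_open_order": ("DELETE", f"trade/{trade_id}/open-order"),
        "reload": ("GET", f"trade/{trade_id}/reload"),
    }
    return routes[action]


print(trade_route(23, "reload"))  # ('GET', 'trade/23/reload')
```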

docs/telegram-usage.md (+1)

@@ -187,6 +187,7 @@ official commands. You can ask at any moment for help with `/help`.
 | `/forcelong <pair> [rate]` | Instantly buys the given pair. Rate is optional and only applies to limit orders. (`force_entry_enable` must be set to True)
 | `/forceshort <pair> [rate]` | Instantly shorts the given pair. Rate is optional and only applies to limit orders. This will only work on non-spot markets. (`force_entry_enable` must be set to True)
 | `/delete <trade_id>` | Delete a specific trade from the Database. Tries to close open orders. Requires manual handling of this trade on the exchange.
+| `/reload_trade <trade_id>` | Reload a trade from the Exchange. Only works in live, and can potentially help recover a trade that was manually sold on the exchange.
 | `/cancel_open_order <trade_id> | /coo <trade_id>` | Cancel an open order for a trade.
 | **Metrics** |
 | `/profit [<n>]` | Display a summary of your profit/loss from close trades and some stats about your performance, over the last n days (all trades by default)

freqtrade/commands/data_commands.py (+2 -2)

@@ -52,7 +52,7 @@ def start_download_data(args: Dict[str, Any]) -> None:
     pairs_not_available: List[str] = []
 
     # Init exchange
-    exchange = ExchangeResolver.load_exchange(config['exchange']['name'], config, validate=False)
+    exchange = ExchangeResolver.load_exchange(config, validate=False)
     markets = [p for p, m in exchange.markets.items() if market_is_active(m)
                or config.get('include_inactive')]
 

@@ -125,7 +125,7 @@ def start_convert_trades(args: Dict[str, Any]) -> None:
             "Please check the documentation on how to configure this.")
 
     # Init exchange
-    exchange = ExchangeResolver.load_exchange(config['exchange']['name'], config, validate=False)
+    exchange = ExchangeResolver.load_exchange(config, validate=False)
     # Manual validations of relevant settings
     if not config['exchange'].get('skip_pair_validation', False):
         exchange.validate_pairs(config['pairs'])

freqtrade/commands/list_commands.py (+2 -2)

@@ -114,7 +114,7 @@ def start_list_timeframes(args: Dict[str, Any]) -> None:
     config['timeframe'] = None
 
     # Init exchange
-    exchange = ExchangeResolver.load_exchange(config['exchange']['name'], config, validate=False)
+    exchange = ExchangeResolver.load_exchange(config, validate=False)
 
     if args['print_one_column']:
         print('\n'.join(exchange.timeframes))

@@ -133,7 +133,7 @@ def start_list_markets(args: Dict[str, Any], pairs_only: bool = False) -> None:
     config = setup_utils_configuration(args, RunMode.UTIL_EXCHANGE)
 
     # Init exchange
-    exchange = ExchangeResolver.load_exchange(config['exchange']['name'], config, validate=False)
+    exchange = ExchangeResolver.load_exchange(config, validate=False)
 
     # By default only active pairs/markets are to be shown
     active_only = not args.get('list_pairs_all', False)

freqtrade/commands/pairlist_commands.py (+1 -1)

@@ -18,7 +18,7 @@ def start_test_pairlist(args: Dict[str, Any]) -> None:
     from freqtrade.plugins.pairlistmanager import PairListManager
     config = setup_utils_configuration(args, RunMode.UTIL_EXCHANGE)
 
-    exchange = ExchangeResolver.load_exchange(config['exchange']['name'], config, validate=False)
+    exchange = ExchangeResolver.load_exchange(config, validate=False)
 
     quote_currencies = args.get('quote_currencies')
     if not quote_currencies:
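All three command modules receive the same change: `ExchangeResolver.load_exchange` now reads the exchange name out of the config itself instead of taking it as a separate first argument, so every caller passes only the config. A stand-in sketch of the new call shape (this is not the real resolver, just the argument pattern):

```python
from typing import Any, Dict


def load_exchange(config: Dict[str, Any], validate: bool = False) -> str:
    """Stand-in for ExchangeResolver.load_exchange after this commit:
    the exchange name is read internally from config['exchange']['name']."""
    name = config["exchange"]["name"]
    # ...the real resolver would instantiate and return the exchange class here...
    return name


config = {"exchange": {"name": "binance"}}
print(load_exchange(config, validate=False))  # binance
```

Pulling the name from the config removes a redundant parameter that every call site previously had to extract from the same config anyway.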

freqtrade/enums/exittype.py (+1)

@@ -15,6 +15,7 @@ class ExitType(Enum):
     EMERGENCY_EXIT = "emergency_exit"
     CUSTOM_EXIT = "custom_exit"
     PARTIAL_EXIT = "partial_exit"
+    SOLD_ON_EXCHANGE = "sold_on_exchange"
     NONE = ""
 
     def __str__(self):
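The one-line change adds a member for trades detected as sold directly on the exchange, matching the new `trade/<tradeid>/reload` endpoint and `/reload_trade` command. A minimal self-contained sketch of the enum after this commit (only the members around the change are reproduced; `__str__` is assumed to return the value, as the diff context suggests):

```python
from enum import Enum


class ExitType(Enum):
    # abbreviated sketch: only members near the change are shown
    CUSTOM_EXIT = "custom_exit"
    PARTIAL_EXIT = "partial_exit"
    SOLD_ON_EXCHANGE = "sold_on_exchange"  # new in this commit
    NONE = ""

    def __str__(self):
        # return the raw value so exit reasons print readably
        return self.value


print(ExitType.SOLD_ON_EXCHANGE)  # sold_on_exchange
```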
