Issue135 forecast uncertainty 2 #693

Open
dhblum wants to merge 54 commits into master
Conversation

dhblum (Collaborator) commented Oct 18, 2024

This is for #135. It's a test PR to replace #482 and include the changes from Wanfu.

laura-zabala and others added 30 commits August 19, 2022 12:10
ErrorEmulator.py: function to generate an error with an AR model in the prediction horizon.
SimulateError.ipynb: notebook to simulate the error for different uncertainty scenarios.
The predict_error method is imported in forecaster.py.
The arguments required by "predict_error" are propagated in get_forecast: F0, K0, F, K, mu. The hp is already defined (horizon). Default values are included for these parameters for the deterministic scenario.
predict_error is updated to predict_error_AR to specify the error model
…into issue135_forecastUncertainty

# Conflicts:
#	forecast/forecaster.py
…nario, and prevent the weather forecast uncertainty from being overwritten in get_forecast after setting it in the scenario.
…n and introduced two new tests: test_forecast_temperature_are_within_range and test_forecast_solar_radiation_are_positive for improved criteria checking.
…o issue135_forecastUncertainty

merge laura's revise
dhblum requested a review from HWalnum on February 27, 2025
dhblum (Collaborator, Author) commented Feb 27, 2025

@HWalnum Here's the PR for adding weather forecast uncertainty. If you can test it out, that'd be great. Note, I'm realizing the README doesn't say this, but the values that the temperature and radiation uncertainty specifications in the /scenario API can take are 'low', 'medium', or 'high'. We need to add this information to the README and also to the user guide.
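For reference, here is a minimal sketch of how a client might select these scenarios, assuming they are set through the existing PUT /scenario endpoint. The key names `temperature_uncertainty` and `solar_uncertainty` are placeholders I made up for illustration, not confirmed against this PR; check the forecast README for the actual specification.

```python
# Hypothetical example of selecting forecast uncertainty scenarios through the
# /scenario API. The key names below are assumed placeholders; the accepted
# values are 'low', 'medium', or 'high'.
import requests

url = 'http://127.0.0.1:80'  # local BOPTEST deployment (assumed)

res = requests.put(
    '{0}/scenario'.format(url),
    data={
        'temperature_uncertainty': 'medium',
        'solar_uncertainty': 'high',
    },
)
print(res.json())
```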

HWalnum (Contributor) commented Feb 28, 2025

@dhblum I have done a local test of the forecast uncertainty module. I have not done a review of the implementation. Did you want that as well? You have probably done this already...

In general, it seems to work well. I tested it on a "private" test case where we have previously compared a perfect forecast against an actual weather forecast. Looking at the influence on the performance of the controller (change in MPC objective achievement), the "high" scenario (I tested only with the same scenario for both parameters) gives the results most similar to the "actual weather forecast" case. Looking at the error variance, the "low" scenario for temperature looks more similar to the actual forecast, while "high" looks more similar for solar radiation. That said, this is only one case, so I have no objections to the applied distributions.

Above, you mention a user guide. I could not find it. Is it in the repo?
Looking for it, I found the readme.md in the forecast folder. This should probably be updated.

Lastly, on a separate note: I think it is worth mentioning in the main readme that when one makes changes to the local version of BOPTEST, one needs to add a --build flag to the docker compose up command.

dhblum (Collaborator, Author) commented Mar 3, 2025

Thanks so much @HWalnum for this nice testing, even using an MPC controller and comparing against previous work you've done. The user guide is actually still only on the website branch gh-pages-custom; I update it with each release. Noted on the forecast readme. Also noted on the --build flag for rebuilding the images when updating with local changes; I could add a note to the main readme.

Regarding the parameters for the uncertainty model, they are based on the recently published paper. @laura-zabala I'm actually going back and looking at the model parameters; see the table below. The GHI parameters come from the paper directly, and it is relatively easy to see a clear "low", "medium", and "high" scenario across the cities. The temperature parameters aren't as clear: some have a lower mu with a higher sigma, and vice versa. So I understand it's tougher to choose a low, medium, and high. But can you confirm that the currently chosen parameters in the temperature model are based on the latest published parameters? I think it would be good to add a brief note to the design doc describing the rationale for how the parameters were selected.

Temperature AR1 Parameters

|            | Berkeley | Leuven | Berlin | Oslo  | "Low" | "Medium" | "High" |
|------------|----------|--------|--------|-------|-------|----------|--------|
| ac factor  | 0.939    | 0.918  | 0.903  | 0.924 | 0.92  | 0.93     | 0.95   |
| mean_resid | 0.008    | 0.045  | 0.047  | 0.017 | 0     | 0        | -0.015 |
| std_resid  | 0.711    | 0.771  | 0.575  | 0.85  | 0.4   | 0.6      | 0.7    |
| mean_0     | -0.18    |        | 0.37   | -0.15 | 0     | 0.15     | -0.58  |
| std_0      | 1.4      |        | 1.1    | 1.8   | 0.6   | 1.2      | 1.5    |
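To make the role of these parameters concrete, here is a minimal sketch (my own illustration, not the repo's predict_error_AR implementation) of how an AR(1) error trajectory could be sampled from one column of the table; the mapping of the table rows onto an AR(1) recursion is my assumption.

```python
# Sketch: sample an AR(1) forecast error trajectory from tabulated parameters.
# Assumes e[0] ~ N(mean_0, std_0) and e[k] = ac*e[k-1] + N(mean_resid, std_resid).
import numpy as np

def sample_ar1_error(hp, ac, mean_resid, std_resid, mean_0, std_0, seed=None):
    """Return an error trajectory of length hp (forecast horizon steps)."""
    rng = np.random.default_rng(seed)
    e = np.zeros(hp)
    # Initial error drawn from the fitted first-step distribution.
    e[0] = rng.normal(mean_0, std_0)
    # Subsequent errors follow the AR(1) recursion with residual noise.
    for k in range(1, hp):
        e[k] = ac * e[k - 1] + rng.normal(mean_resid, std_resid)
    return e

# "Medium" temperature scenario from the table above, over a 48-step horizon.
error = sample_ar1_error(hp=48, ac=0.93, mean_resid=0, std_resid=0.6,
                         mean_0=0, std_0=1.2, seed=0)
```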
