Added tests for explain_model function. gssoc-extd #24
Description: This pull request introduces a series of unit tests for the explain_model and calculate_metrics functions located in the utils.py file of the explainableai directory. These tests use the pytest framework to ensure the functions work correctly for both regression and classification models.
Added Tests:
test_explain_model_regression:
Tests the explain_model function using a LinearRegression model.
Checks if the returned explanation contains keys "feature_importance" and "model_type".
Verifies that the model type matches the expected linear regression model and that the number of feature importances corresponds to the number of input features.
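A minimal sketch of what this test might look like, assuming explain_model(model, X, y) returns a dict with "feature_importance" and "model_type" keys and that "model_type" holds the estimator's class name; the exact signature in explainableai/utils.py may differ:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

from explainableai.utils import explain_model  # assumed import path


def test_explain_model_regression():
    # Small synthetic regression problem so the test runs quickly.
    X, y = make_regression(n_samples=50, n_features=5, noise=0.1, random_state=42)
    model = LinearRegression().fit(X, y)

    # Assumed call signature: explain_model(model, X, y) -> dict.
    explanation = explain_model(model, X, y)

    assert "feature_importance" in explanation
    assert "model_type" in explanation
    # Assumes "model_type" is reported as the estimator's class name.
    assert explanation["model_type"] == "LinearRegression"
    # One importance value per input feature.
    assert len(explanation["feature_importance"]) == X.shape[1]
```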
test_explain_model_classification:
Tests the explain_model function using a LogisticRegression model.
Checks if the explanation contains keys "feature_importance" and "model_type".
Confirms the model type matches the logistic regression model and that the number of feature importances matches the input feature count.
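A similar sketch for the classification case, under the same assumptions about the explain_model interface:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

from explainableai.utils import explain_model  # assumed import path


def test_explain_model_classification():
    # Small synthetic binary classification problem.
    X, y = make_classification(n_samples=50, n_features=5, n_informative=3,
                               random_state=42)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    explanation = explain_model(model, X, y)

    assert "feature_importance" in explanation
    assert "model_type" in explanation
    # Assumes "model_type" is reported as the estimator's class name.
    assert explanation["model_type"] == "LogisticRegression"
    assert len(explanation["feature_importance"]) == X.shape[1]
```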
test_calculate_metrics_regression:
Tests the calculate_metrics function with a LinearRegression model.
Verifies that the metrics returned include "mse" (Mean Squared Error) and "r2" (R-squared).
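A sketch of the regression metrics test, assuming calculate_metrics(model, X_test, y_test) returns a dict keyed by metric name; the actual signature in utils.py may differ:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

from explainableai.utils import calculate_metrics  # assumed import path


def test_calculate_metrics_regression():
    X, y = make_regression(n_samples=50, n_features=5, noise=0.1, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
    model = LinearRegression().fit(X_train, y_train)

    # Assumed call signature: calculate_metrics(model, X_test, y_test) -> dict.
    metrics = calculate_metrics(model, X_test, y_test)

    assert "mse" in metrics
    assert "r2" in metrics
```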
test_calculate_metrics_classification:
Tests the calculate_metrics function using a LogisticRegression model.
Ensures the returned metrics contain "accuracy" and "f1_score".
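And a sketch of the classification metrics test, under the same assumed calculate_metrics interface:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from explainableai.utils import calculate_metrics  # assumed import path


def test_calculate_metrics_classification():
    X, y = make_classification(n_samples=50, n_features=5, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    metrics = calculate_metrics(model, X_test, y_test)

    assert "accuracy" in metrics
    assert "f1_score" in metrics
```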
Used pytest to achieve this.

