
LogisticRegressionL1L2GL - Always same classification output. #11

Open · sOuLsK opened this issue May 30, 2015 · 1 comment

@sOuLsK commented May 30, 2015

Hello,

I've been trying to use your LogisticRegressionL1L2GL for a binary classification problem. I'm doing this with a scikit-learn Pipeline in which the steps before LogisticRegressionL1L2GL are a CountVectorizer and a TfidfTransformer. To use LogisticRegressionL1L2GL in this setup, I convert the csr_matrix that the previous step outputs into a dense array and pass it as X to the fit and predict methods. However, the classification result is always the same class: looking into the code, the probability assigned to every instance is 0.5, so the predicted class is always 1.0.
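
In rough outline, the setup looks like this (a sketch, not my exact code: the penalty values and the single feature group are placeholders, and the LogisticRegressionL1L2GL constructor arguments and the group-operator helper should be checked against the pylearn-parsimony docs, since older releases may name them differently):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from parsimony.estimators import LogisticRegressionL1L2GL
import parsimony.functions.nesterov.gl as gl

docs = ["spam spam offer now", "meeting agenda attached",
        "offer now limited spam", "agenda for the meeting"]
y = np.array([[1.0], [0.0], [1.0], [0.0]])  # parsimony expects a column vector of 0/1 labels

counts = CountVectorizer().fit_transform(docs)          # csr_matrix
X = TfidfTransformer().fit_transform(counts).toarray()  # densified for parsimony

# Placeholder group structure (all features in one group); an assumption,
# not my real grouping, and the helper name may differ between versions.
A = gl.linear_operator_from_groups(X.shape[1], groups=[list(range(X.shape[1]))])

estimator = LogisticRegressionL1L2GL(l1=0.1, l2=0.1, gl=0.1, A=A)
estimator.fit(X, y)
print(estimator.predict(X))  # on my real data this is always the same class
```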

Could I be doing something wrong, and if so, is there any insight you could provide?

Thanks in advance

@tomlof (Collaborator) commented Mar 6, 2017

I'm terribly sorry for the very late response.

Currently, the estimators in pylearn-parsimony do not conform to the scikit-learn interface. We have discussed this on several occasions, but have not yet found a good way to make it work, nor implemented it. There are several problems, the main one being that scikit-learn estimators must conform to certain method signatures (i.e. they must accept particular arguments). The pylearn-parsimony estimators may differ from these requirements and are generally much more liberal in the arguments they accept. This can certainly be overcome, but we have not yet done it.
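
For what it's worth, a thin adapter of roughly the following shape is one way to contain the mismatch today (illustration only, not part of pylearn-parsimony; the class name is made up, and it assumes the parsimony estimator has already been constructed with its own arguments):

```python
import numpy as np
import scipy.sparse as sp
from sklearn.base import BaseEstimator, ClassifierMixin

class ParsimonyClassifierAdapter(BaseEstimator, ClassifierMixin):
    """Hypothetical adapter exposing the fixed fit(X, y) / predict(X)
    signature that scikit-learn pipelines expect."""

    def __init__(self, estimator):
        self.estimator = estimator  # an already-constructed parsimony estimator

    def fit(self, X, y):
        if sp.issparse(X):
            X = X.toarray()  # parsimony estimators work on dense arrays
        y = np.asarray(y, dtype=float).reshape(-1, 1)  # column vector of labels
        self.estimator.fit(X, y)
        return self

    def predict(self, X):
        if sp.issparse(X):
            X = X.toarray()
        return self.estimator.predict(X).ravel()
```

Used as the last step of a Pipeline after CountVectorizer and TfidfTransformer, this keeps the signature differences in one place, but it is only a workaround, not the proper integration discussed above.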
