🚀 The feature, motivation and pitch
The `/v1/models` response from the setup above cannot expose the lineage between LoRA adapters and base models. In the example below, `root` always points to the base model.
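For concreteness, here is a sketch of what the current response can look like; the model and adapter names (`meta-llama/Llama-2-7b-hf`, `sql-lora`) are hypothetical and the cards are abbreviated:

```json
{
  "object": "list",
  "data": [
    {
      "id": "meta-llama/Llama-2-7b-hf",
      "object": "model",
      "owned_by": "vllm",
      "root": "meta-llama/Llama-2-7b-hf"
    },
    {
      "id": "sql-lora",
      "object": "model",
      "owned_by": "vllm",
      "root": "meta-llama/Llama-2-7b-hf"
    }
  ]
}
```

The adapter's card carries nothing that links it to its base model entry, and its `root` does not point at the adapter's own weights.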
Current Status

The base model card is named from either `--model` or `--served-model-name`. If the user passes a local path, then the `id` and `root` fields are that path rather than a model ID like OpenAI's (see the sketch below).

LoRA model card information comes from `LoRARequest`, which doesn't have a `base_model` field at this moment. Technically, we can assume they are all adapters of the base model, but this may break later once the engine supports multiple models.
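To illustrate the first point, if the server is launched with a local checkout, e.g. `--model /data/models/llama-2-7b` (a hypothetical path), the base model card would come back roughly as:

```json
{
  "id": "/data/models/llama-2-7b",
  "object": "model",
  "owned_by": "vllm",
  "root": "/data/models/llama-2-7b"
}
```

so `id` is a filesystem path rather than an OpenAI-style model ID.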
Expected
We can use `root` to represent the model path and `parent` to indicate the base model for LoRA adapters. Since these fields are not part of the OpenAI protocol, we should be able to make the change; a sketch of the resulting adapter card follows below. I am drafting a PR to address this issue; please help review whether the above looks good.
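Under this proposal, the adapter card from the earlier sketch would instead look roughly like this (names and paths still hypothetical):

```json
{
  "id": "sql-lora",
  "object": "model",
  "owned_by": "vllm",
  "root": "/data/adapters/sql-lora",
  "parent": "meta-llama/Llama-2-7b-hf"
}
```

The base model card would presumably leave `parent` unset, so a client can recover the lineage by following `parent` from adapter to base.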
Alternatives
No response
Additional context
No response