Commit 50ddb3e

chore(model gallery): add nvidia_llama-3_3-nemotron-super-49b-v1 (mudler#5041)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
1 parent: 5eebfee

File tree

1 file changed (+19 −0 lines)
gallery/index.yaml (+19 lines)
@@ -1056,6 +1056,25 @@
     - filename: ReadyArt_Forgotten-Safeword-70B-3.6-Q4_K_M.gguf
       sha256: bd3a082638212064899db1afe29bf4c54104216e662ac6cc76722a21bf91967e
       uri: huggingface://bartowski/ReadyArt_Forgotten-Safeword-70B-3.6-GGUF/ReadyArt_Forgotten-Safeword-70B-3.6-Q4_K_M.gguf
+- !!merge <<: *llama33
+  name: "nvidia_llama-3_3-nemotron-super-49b-v1"
+  icon: https://cdn-avatars.huggingface.co/v1/production/uploads/1613114437487-60262a8e0703121c822a80b6.png
+  urls:
+    - https://huggingface.co/nvidia/Llama-3_3-Nemotron-Super-49B-v1
+    - https://huggingface.co/bartowski/nvidia_Llama-3_3-Nemotron-Super-49B-v1-GGUF
+  description: |
+    Llama-3.3-Nemotron-Super-49B-v1 is a large language model (LLM) which is a derivative of Meta Llama-3.3-70B-Instruct (AKA the reference model). It is a reasoning model that is post-trained for reasoning, human chat preferences, and tasks such as RAG and tool calling. The model supports a context length of 128K tokens.
+
+    Llama-3.3-Nemotron-Super-49B-v1 offers a great tradeoff between model accuracy and efficiency. Efficiency (throughput) directly translates to savings. Using a novel Neural Architecture Search (NAS) approach, we greatly reduce the model's memory footprint, enabling larger workloads, as well as fitting the model on a single GPU at high workloads (H200). This NAS approach enables the selection of a desired point in the accuracy-efficiency tradeoff.
+
+    The model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning stage for Math, Code, Reasoning, and Tool Calling, as well as multiple reinforcement learning (RL) stages using REINFORCE (RLOO) and Online Reward-aware Preference Optimization (RPO) algorithms for both chat and instruction-following. The final model checkpoint is obtained after merging the final SFT and Online RPO checkpoints. For more details on how the model was trained, please see this blog.
+  overrides:
+    parameters:
+      model: nvidia_Llama-3_3-Nemotron-Super-49B-v1-Q4_K_M.gguf
+  files:
+    - filename: nvidia_Llama-3_3-Nemotron-Super-49B-v1-Q4_K_M.gguf
+      sha256: d3fc12f4480cad5060f183d6c186ca47d800509224632bb22e15791711950524
+      uri: huggingface://bartowski/nvidia_Llama-3_3-Nemotron-Super-49B-v1-GGUF/nvidia_Llama-3_3-Nemotron-Super-49B-v1-Q4_K_M.gguf
 - &rwkv
   url: "github:mudler/LocalAI/gallery/rwkv.yaml@master"
   name: "rwkv-6-world-7b"
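The `sha256` field in the entry above lets a gallery client verify a downloaded GGUF file before loading it. As a minimal sketch of that check (the local file path is hypothetical; the expected digest is the one from this commit's `files` entry), the hash should be computed in streaming fashion so a multi-gigabyte model file is never read into memory at once:

```python
import hashlib
from pathlib import Path

# Digest copied from the gallery entry added in this commit.
EXPECTED_SHA256 = "d3fc12f4480cad5060f183d6c186ca47d800509224632bb22e15791711950524"


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks to keep memory use constant."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(path: Path, expected: str = EXPECTED_SHA256) -> bool:
    """Return True if the file on disk matches the gallery's sha256."""
    return sha256_of(path) == expected
```

This mirrors what the gallery's checksum field is for; the actual download/verify logic lives inside LocalAI itself, so this snippet is only an illustration of the integrity check, not the project's implementation.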
