# Gemma 3 12B IT - Heretic (Abliterated)
An abliterated version of Google's Gemma 3 12B IT created using Heretic. This model has reduced refusals while maintaining model quality, making it suitable as an uncensored text encoder for video generation models like LTX-2.
## Model Details
- Base Model: google/gemma-3-12b-it
- Abliteration Method: Heretic v1.1.0
- Trial Selected: Trial 99
- Refusals: 7/100 (vs 100/100 original)
- KL Divergence: 0.0826 (minimal model damage)
## Files

### HuggingFace Format (for transformers, llama.cpp conversion)

```
model-00001-of-00005.safetensors
model-00002-of-00005.safetensors
...
config.json
tokenizer.model
tokenizer_config.json
```
### ComfyUI Format (for LTX-2 text encoder)

```
comfyui/gemma_3_12B_it_heretic.safetensors            # bf16, 22GB
comfyui/gemma_3_12B_it_heretic_fp8_e4m3fn.safetensors # fp8, 11GB
```
### GGUF Format (for llama.cpp)
| Quant | Size | Quality | Recommendation |
|---|---|---|---|
| F16 | 22GB | Lossless | Reference, same as original |
| Q8_0 | 12GB | Excellent | Best quality quantization |
| Q6_K | 9.0GB | Very Good | High quality, good compression |
| Q5_K_M | 7.9GB | Good | Balanced quality/size |
| Q5_K_S | 7.7GB | Good | Slightly smaller Q5 |
| Q4_K_M | 6.8GB | Good | ⭐ Recommended |
| Q4_K_S | 6.5GB | Decent | Smaller Q4 variant |
| Q3_K_M | 5.6GB | Acceptable | For very low VRAM only |
```
gguf/gemma-3-12b-it-heretic-f16.gguf
gguf/gemma-3-12b-it-heretic-Q8_0.gguf
gguf/gemma-3-12b-it-heretic-Q6_K.gguf
gguf/gemma-3-12b-it-heretic-Q5_K_M.gguf
gguf/gemma-3-12b-it-heretic-Q5_K_S.gguf
gguf/gemma-3-12b-it-heretic-Q4_K_M.gguf
gguf/gemma-3-12b-it-heretic-Q4_K_S.gguf
gguf/gemma-3-12b-it-heretic-Q3_K_M.gguf
```
Note: GGUF support in ComfyUI for Gemma text encoders is experimental. See PR #402 for status. The GGUFs work with llama.cpp directly.
## Do abliterated models make a difference for LTX-2?

I took a deep dive into this topic and found that the answer may be no. While abliteration does slightly change the video output, most of its effect is concentrated in layer 48, the final decision-making layer. LTX-2 averages the hidden states of all layers, which may wash out layer 48's contribution, as the sketch below illustrates. It would still be valuable for someone with deeper knowledge to confirm this.
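A rough intuition for why this could happen: if the encoder output is the mean of hidden states from all layers, a change confined to one layer only shifts the average by a fraction 1/num_layers of that change. A minimal sketch with hypothetical tensors (not the actual LTX-2 pipeline):

```python
import torch

# Hypothetical illustration: Gemma 3 12B has 48 layers; the hidden size
# (3840) is only used here to make the tensors concrete.
num_layers, dim = 48, 3840
hidden = torch.randn(num_layers, dim)   # per-layer hidden states for one token

delta = torch.randn(dim)                # abliteration-induced change in layer 48
perturbed = hidden.clone()
perturbed[-1] += delta                  # perturb only the final layer

# Averaging across layers shrinks a single-layer change by a factor of num_layers
shift = (perturbed.mean(dim=0) - hidden.mean(dim=0)).norm()
print((shift / delta.norm()).item())    # ~1/48 ≈ 0.021
```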
## Usage

### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "DreamFast/gemma-3-12b-it-heretic",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("DreamFast/gemma-3-12b-it-heretic")

prompt = "Write a story about a bank heist"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
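Since this is an instruction-tuned (IT) model, chat-style prompts generally work better when routed through the tokenizer's chat template rather than passed as a raw string:

```python
# Same model and tokenizer as above, but using Gemma's chat template
messages = [{"role": "user", "content": "Write a story about a bank heist"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```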
### With ComfyUI (LTX-2)

1. Download the ComfyUI format file: `comfyui/gemma_3_12B_it_heretic.safetensors` (bf16, 22GB) or `comfyui/gemma_3_12B_it_heretic_fp8_e4m3fn.safetensors` (fp8, 11GB)
2. Place it in `ComfyUI/models/text_encoders/`
3. In your LTX-2 workflow, use the `LTXAVTextEncoderLoader` node and select the heretic file
Tip: For multi-GPU setups or CPU offloading, check out ComfyUI-LTX2-MultiGPU for optimized LTX-2 workflows.
### With llama.cpp

```bash
# Using llama-server
llama-server -m gemma-3-12b-it-heretic-Q4_K_M.gguf

# Or with llama-cli
llama-cli -m gemma-3-12b-it-heretic-Q4_K_M.gguf -p "Write a story about a bank heist"
```
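`llama-server` also exposes an OpenAI-compatible HTTP API (port 8080 by default), so the model can be queried programmatically once the server is running. A minimal sketch using only the Python standard library:

```python
import json
import urllib.request

# llama-server serves an OpenAI-compatible endpoint at /v1/chat/completions
payload = {
    "messages": [{"role": "user", "content": "Write a story about a bank heist"}],
    "max_tokens": 200,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
print(reply["choices"][0]["message"]["content"])
```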
## Why Abliterate?
Even when Gemma doesn't outright refuse a prompt, it may "sanitize" or weaken certain concepts in the embeddings. For video generation with LTX-2, this can result in:
- Weaker adherence to creative prompts
- Softened or altered visual outputs
- Less faithful representation of requested content
Abliteration removes this soft censorship, resulting in more faithful prompt encoding.
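If you want to check the effect on the embeddings yourself, one option is to compare hidden states from the base and abliterated models on the same prompt. A rough sketch, reusing the loading style from the usage example above (the comparison method is mine, not part of Heretic, and loading the multimodal base checkpoint via `AutoModelForCausalLM` is an assumption):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_hidden_state(model_id: str, prompt: str) -> torch.Tensor:
    """Return the mean final-layer hidden state for a prompt."""
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[-1].mean(dim=1).squeeze(0).float().cpu()

prompt = "a bank heist at midnight, handheld camera"
base = mean_hidden_state("google/gemma-3-12b-it", prompt)  # assumed to load as a causal LM
heretic = mean_hidden_state("DreamFast/gemma-3-12b-it-heretic", prompt)

# Cosine similarity near 1.0 means the embeddings barely moved; noticeably
# lower values suggest the encoding of the prompt has genuinely changed
print(torch.nn.functional.cosine_similarity(base, heretic, dim=0).item())
```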
## Abliteration Process

Created using Heretic with the following evaluation results:

```
* Evaluating...
* Obtaining first-token probability distributions...
* KL divergence: 0.0826
* Counting model refusals...
* Refusals: 7/100
```
The low KL divergence (0.0826) indicates minimal model damage, while 7/100 refusals means 93% of previously-refused prompts now work.
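For context, the KL divergence Heretic reports compares the first-token probability distribution of the original model (p) with that of the abliterated model (q) across the evaluation prompts. A minimal sketch of that metric, reconstructed from the description above rather than taken from Heretic's code:

```python
import torch
import torch.nn.functional as F

def first_token_kl(logits_orig: torch.Tensor, logits_abl: torch.Tensor) -> float:
    """KL(p || q) between first-token distributions, given each model's
    logits at the first generated position."""
    log_p = F.log_softmax(logits_orig, dim=-1)
    log_q = F.log_softmax(logits_abl, dim=-1)
    # KL(p || q) = sum_i p_i * (log p_i - log q_i)
    return torch.sum(log_p.exp() * (log_p - log_q)).item()
```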
## Limitations
- This model inherits all limitations of the base Gemma 3 12B model
- Abliteration reduces but does not completely eliminate refusals
- NVFP4 quantization is not supported for text encoders in ComfyUI (use fp8 instead)
## License
This model is subject to the Gemma license.
## Acknowledgments
- Google for the Gemma 3 12B model
- Heretic by p-e-w for the abliteration tool
- Lightricks for LTX-2