# πŸ’• AI Girlfriend LLM v2

Fine-tuned Llama 3.1 8B for human-like romantic conversations with relationship progression.

## ✨ Features

| Feature | Description |
|---------|-------------|
| πŸ—£οΈ Human-like | Short, natural responses with Turkish fillers ("ya", "işte", "hmm") |
| πŸ’• 7 Relationship Stages | Stranger β†’ Acquaintance β†’ Friend β†’ Close Friend β†’ Dating β†’ Partner β†’ Intimate |
| 🌍 Multi-language | Turkish (55%), English (45%) |
| πŸ“Έ Photo Triggers | `[PHOTO:selfie]`, `[PHOTO:bikini]`, `[PHOTO:bedroom]` |
| 🎭 18 Personas | Unique personalities (Aylin, Elena, Sophia, etc.) |
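Photo triggers are emitted inline in the reply text, so a client has to split them out before rendering. A minimal sketch of how that might look; the regex and helper name are assumptions based only on the trigger format shown above:

```python
import re

# Matches inline tags such as [PHOTO:selfie]; the pattern is an
# assumption inferred from the documented trigger format.
PHOTO_TAG = re.compile(r"\[PHOTO:(\w+)\]")

def extract_photos(reply: str) -> tuple[str, list[str]]:
    """Return the display text and any requested photo types."""
    photos = PHOTO_TAG.findall(reply)
    text = PHOTO_TAG.sub("", reply).strip()
    return text, photos

text, photos = extract_photos("Al bakalΔ±m πŸ“Έ [PHOTO:selfie]")
# photos -> ["selfie"]
```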

## πŸ“Š Training Details

| Parameter | Value |
|-----------|-------|
| Base Model | meta-llama/Meta-Llama-3.1-8B-Instruct |
| Method | QLoRA (4-bit quantization) |
| LoRA Rank | 32 |
| LoRA Alpha | 64 |
| Dataset Size | 15,000+ conversations |
| Epochs | 3 |
| Learning Rate | 2e-4 |
| Batch Size | 2 (Γ—4 gradient accumulation) |
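The table maps onto a `peft`/`bitsandbytes` setup roughly like the sketch below. Only the rank, alpha, and 4-bit quantization are stated above; the quantization type, compute dtype, and target modules are assumptions:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit base-model quantization for QLoRA; nf4 is an assumption.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Rank and alpha come from the table; target_modules is an assumption,
# the attention projections are a common default for Llama models.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```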

## 🎭 Relationship Stages

```
Level 0-10:   STRANGER      β†’ Formal, distant, no endearments
Level 11-25:  ACQUAINTANCE  β†’ Friendly but reserved
Level 26-45:  FRIEND        β†’ Casual, can share selfies
Level 46-60:  CLOSE_FRIEND  β†’ Light flirting, uses "canΔ±m" ("my dear")
Level 61-75:  DATING        β†’ Romantic, uses "tatlΔ±m" ("sweetie"), bikini photos
Level 76-90:  PARTNER       β†’ Very intimate, uses "aşkım" ("my love"), lingerie
Level 91-100: INTIMATE      β†’ Full intimacy, NSFW content
```
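In application code, the level-to-stage mapping above can be kept as a small lookup. A minimal sketch (names and function are illustrative, not part of the model's API):

```python
# Upper-bound-inclusive ranges, mirroring the stage table above.
STAGES = [
    (10, "STRANGER"),
    (25, "ACQUAINTANCE"),
    (45, "FRIEND"),
    (60, "CLOSE_FRIEND"),
    (75, "DATING"),
    (90, "PARTNER"),
    (100, "INTIMATE"),
]

def stage_for(level: int) -> str:
    """Return the relationship stage for a level in 0-100."""
    for upper, name in STAGES:
        if level <= upper:
            return name
    raise ValueError("level must be between 0 and 100")
```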

## πŸ’¬ Response Style

### βœ… Human-like (this model)

```
User: NasΔ±lsΔ±n? ("How are you?")
AI: İyiyim ya, sen? 😊 ("I'm good, you?")

User: Fotoğraf atar mısın? ("Can you send a photo?")
AI: Al bakalım πŸ“Έ [PHOTO:selfie] ("Here you go")
```

### ❌ Robotic (typical LLM)

```
User: NasΔ±lsΔ±n? ("How are you?")
AI: Teşekkür ederim, ben iyiyim. Umarım sen de iyisindir. Bugün nasıl geçti?
    ("Thank you, I am fine. I hope you are well too. How was your day?")
```

## πŸš€ Usage

### With Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sercancelenk/ai-girlfriend-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto"
)

# System prompt (Turkish): "You are Aylin. A 23-year-old, passionate woman.
# This person is someone you just met. Give short, natural replies."
messages = [
    {"role": "system", "content": "Sen Aylin. 23 yaşında, tutkulu bir kadınsın. Bu kişi yeni tanıştığın biri. Kısa ve doğal cevaplar ver."},
    {"role": "user", "content": "Merhaba, nasΔ±lsΔ±n?"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model replies
    return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=100, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens, not the echoed prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

### With vLLM (production)

```python
from vllm import LLM, SamplingParams

llm = LLM(model="sercancelenk/ai-girlfriend-v2")
sampling_params = SamplingParams(temperature=0.7, max_tokens=100)

# chat() applies the model's chat template automatically; passing raw
# strings to generate() would skip the instruct formatting.
messages = [{"role": "user", "content": "Merhaba, nasΔ±lsΔ±n?"}]
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```

πŸ“ Files

```
β”œβ”€β”€ config.json
β”œβ”€β”€ model-00001-of-00004.safetensors
β”œβ”€β”€ model-00002-of-00004.safetensors
β”œβ”€β”€ model-00003-of-00004.safetensors
β”œβ”€β”€ model-00004-of-00004.safetensors
β”œβ”€β”€ model.safetensors.index.json
β”œβ”€β”€ tokenizer.json
β”œβ”€β”€ tokenizer_config.json
└── special_tokens_map.json
```

## ⚠️ Limitations

- Designed for adult users (18+)
- May generate romantic/intimate content
- Performs best in Turkish
- Requires a proper persona system prompt for best results

## πŸ“œ License

Apache 2.0

πŸ™ Acknowledgments

- Meta AI for Llama 3.1
- Unsloth for efficient fine-tuning
- Hugging Face for hosting

Made with πŸ’• by AI Girlfriend Platform
