G4-Meromero-31B: A High-Performance, Uncensored Finetune of Gemma 4 31B for Creative Generation

G4-Meromero-31B is a heavily finetuned derivative of the Gemma 4 31B base model. Engineered for complex, unrestricted creative content generation, it reports a Kullback-Leibler Divergence (KLD) of 0.0100 and a refusal rate of 15/100 (15 refusals across 100 test prompts), making it a potent tool for researchers and developers who require permissive LLM behavior.

Model Overview and Design Philosophy

This model represents a significant effort in optimizing open-source LLMs for specialized use cases. Built on the foundation of Gemma 4 31B, G4-Meromero-31B has undergone extensive finetuning to maximize generative capacity while minimizing inherent safety constraints. The design objective is clear: an uncensored model suited to creative tasks where unrestricted output is desired.

Key Performance Metrics

The reported performance indicators reflect how closely the finetune tracks its base model and how well it meets its finetuning goals:

  • Base Model: Gemma 4 31B
  • Kullback-Leibler Divergence (KLD): 0.0100, indicating that the finetune's output distribution stays close to that of its base model
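For reference, the KLD metric measures how far one probability distribution diverges from another. A minimal sketch of the computation over two toy next-token distributions follows; the function name and example values are illustrative and are not taken from the model's actual evaluation pipeline:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) in nats for two discrete distributions.

    p and q are probability lists over the same vocabulary, e.g.
    next-token probabilities from a base model and its finetune
    (hypothetical usage, not this model's evaluation code).
    """
    return sum(
        pi * math.log(max(pi, eps) / max(qi, eps))
        for pi, qi in zip(p, q)
    )

# Two nearly identical toy distributions yield a small KLD,
# in the same spirit as the low value reported above.
base = [0.70, 0.20, 0.10]
tuned = [0.68, 0.21, 0.11]
print(f"{kl_divergence(base, tuned):.4f}")  # prints 0.0010
```

A KLD of exactly 0 would mean the two distributions are identical; values near 0.01 indicate the finetune's token probabilities remain very close to the base model's.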