Dolphin Mistral 24B performance data on Rival is based on blind head-to-head community voting. All vote data is part of Rival's open dataset of 21,000+ human preference judgments across 200+ AI models. Model responses are curated from 0 challenges.
Dolphin Mistral 24B is an explicitly uncensored fine-tune of Mistral Small 24B by Cognitive Computations and Eric Hartford. Designed for unrestricted research use, it removes alignment-based content filtering while retaining strong instruction-following capabilities.
Use Dolphin Mistral 24B in your applications via the OpenRouter API. Copy the code below to get started.
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Authorization": "Bearer $OPENROUTER_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "cognitivecomputations/dolphin-mistral-24b-venice-edition:free",
        "messages": [{"role": "user", "content": "Hello!"}]
    }
)
print(response.json())

Replace $OPENROUTER_API_KEY with your API key from openrouter.ai/keys
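The JSON returned by the endpoint follows the OpenAI-style chat completions schema, with the model's reply nested under a `choices` array. A minimal sketch of pulling the text out safely (the sample payload below is illustrative, not a real API response; `extract_reply` is a hypothetical helper, not part of any SDK):

```python
# Illustrative OpenAI-style chat completion payload, as OpenRouter returns it.
# A live call would produce this dict via response.json() in the snippet above.
sample = {
    "id": "gen-123",
    "model": "cognitivecomputations/dolphin-mistral-24b-venice-edition:free",
    "choices": [
        {"message": {"role": "assistant", "content": "Hello! How can I help?"}}
    ],
}

def extract_reply(payload: dict) -> str:
    """Return the first assistant message, or an empty string if absent."""
    choices = payload.get("choices") or []
    if not choices:
        return ""
    return choices[0].get("message", {}).get("content", "")

print(extract_reply(sample))
```

Guarding against a missing or empty `choices` array keeps the sketch from raising on error responses, which carry an `error` object instead of completions.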
0 outputs from Dolphin Mistral 24B
No community responses have been collected for this model yet.