Retired Feb 14, 2019. Superseded by GPT-2. Available only on Hugging Face as a historical artifact. “The quiet beginning no one noticed.”
GPT-1 performance data on Rival is based on blind head-to-head community voting. Overall win rate: 16.7% across 36 duels. All vote data is part of Rival's open dataset of 21,000+ human preference judgments across 200+ AI models. Model responses are curated from 7 challenges.
These are the models that show up when GPT-1 doesn't. Or when it does, but you want a second opinion. Which is healthy.
The first large-scale transformer-based language model released by OpenAI, trained on the BooksCorpus dataset. This version is accessed via the Hugging Face model hub (`openai-community/openai-gpt`).
- Lexical diversity: unique words vs. total words. Higher = richer vocabulary.
- Sentence length: average words per sentence.
- Hedging: "might", "perhaps", "arguably" per 100 words.
- Bold usage: **bold** markers per 1,000 characters.
- Lists: bullet and numbered list items per 1,000 characters.
- Headings: markdown headings per 1,000 characters.
- Emoji: emoji per 1,000 characters.
- Transitions: "however", "moreover", "furthermore" per 100 words.
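As a rough illustration, metrics like these can be computed with simple text counting. The sketch below is an assumption about how such stylometrics might work, not Rival's actual pipeline; the function name, word lists, regexes, and the emoji codepoint range are all illustrative choices.

```python
import re

def style_metrics(text: str) -> dict:
    # Illustrative sketch only: word lists and regexes are assumptions,
    # not the exact definitions used by Rival's metrics.
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lines = text.splitlines()
    n_words = max(len(words), 1)
    n_chars = max(len(text), 1)

    hedges = {"might", "perhaps", "arguably"}
    transitions = {"however", "moreover", "furthermore"}

    return {
        # Unique words vs. total words (type-token ratio)
        "lexical_diversity": len(set(words)) / n_words,
        # Average words per sentence
        "sentence_length": len(words) / max(len(sentences), 1),
        # Hedging words per 100 words
        "hedging": 100 * sum(w in hedges for w in words) / n_words,
        # Transition words per 100 words
        "transitions": 100 * sum(w in transitions for w in words) / n_words,
        # Bold markers (** pairs) per 1,000 characters
        "bold": 1000 * len(re.findall(r"\*\*", text)) / 2 / n_chars,
        # Bullet and numbered list items per 1,000 characters
        "list_items": 1000 * sum(
            bool(re.match(r"\s*(?:[-*+]|\d+\.)\s", line)) for line in lines
        ) / n_chars,
        # Markdown headings per 1,000 characters
        "headings": 1000 * sum(
            line.lstrip().startswith("#") for line in lines
        ) / n_chars,
        # Emoji per 1,000 characters (rough codepoint-range check)
        "emoji": 1000 * sum(0x1F300 <= ord(c) <= 0x1FAFF for c in text) / n_chars,
    }
```

For example, a response containing hedging words, a bold span, a bullet, and a heading would score nonzero on each of those dimensions, with all counts normalized per 100 words or per 1,000 characters as described above.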
7 outputs from GPT-1