'gemma2':_("Google Gemma 2 is a high-performing and efficient model available in three sizes: 2B, 9B, and 27B."),
'qwen2.5':_("Qwen2.5 models are pretrained on Alibaba's latest large-scale dataset, encompassing up to 18 trillion tokens. They support context lengths of up to 128K tokens and are multilingual."),
'phi3.5':_("A lightweight AI model with 3.8 billion parameters, with performance surpassing that of similarly sized and larger models."),
'nemotron-mini':_("A commercial-friendly small language model by NVIDIA optimized for roleplay, RAG QA, and function calling."),
'mistral-small':_("Mistral Small is a lightweight model designed for cost-effective use in tasks like translation and summarization."),
'deepseek-coder-v2':_("An open-source Mixture-of-Experts code language model that achieves performance comparable to GPT-4 Turbo in code-specific tasks."),
'mistral':_("The 7B model released by Mistral AI, updated to version 0.3."),
'mixtral':_("A set of Mixture of Experts (MoE) models with open weights by Mistral AI, available in 8x7b and 8x22b parameter sizes."),
'codegemma':_("CodeGemma is a collection of powerful, lightweight models that can perform a variety of coding tasks like fill-in-the-middle code completion, code generation, natural language understanding, mathematical reasoning, and instruction following."),
'command-r':_("Command R is a large language model optimized for conversational interaction and long-context tasks."),
'command-r-plus':_("Command R+ is a powerful, scalable large language model purpose-built to excel at real-world enterprise use cases."),
'llava':_("🌋 LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. Updated to version 1.6."),
'dolphin-mixtral':_("Uncensored 8x7b and 8x22b fine-tuned models based on the Mixtral mixture-of-experts models that excel at coding tasks. Created by Eric Hartford."),
'dolphin-llama3':_("Dolphin 2.9 is a new model with 8B and 70B sizes by Eric Hartford based on Llama 3 that has a variety of instruction, conversational, and coding skills."),
'qwen2.5-coder':_("The latest series of Code-Specific Qwen models, with significant improvements in code generation, code reasoning, and code fixing."),
'openchat':_("A family of open-source models trained on a wide variety of data, surpassing ChatGPT on various benchmarks. Updated to version 3.5-0106."),
'wizardlm2':_("State-of-the-art large language model from Microsoft AI with improved performance on complex chat, multilingual, reasoning, and agent use cases."),
'stable-code':_("Stable Code 3B is a coding model with instruct and code completion variants on par with models such as Code Llama 7B that are 2.5x larger."),
'qwen2-math':_("Qwen2 Math is a series of specialized math language models built upon the Qwen2 LLMs, which significantly outperform open-source models and even closed-source models (e.g., GPT-4o) in mathematical capability."),
'stablelm2':_("Stable LM 2 is a series of state-of-the-art 1.6B and 12B parameter language models trained on multilingual data in English, Spanish, German, Italian, French, Portuguese, and Dutch."),
'llama3-gradient':_("This model extends Llama 3 8B's context length from 8K to over 1M tokens."),
'wizard-math':_("A model focused on math and logic problems."),
'glm4':_("A strong multilingual general language model with performance competitive with Llama 3."),
'neural-chat':_("A fine-tuned model based on Mistral with good domain and language coverage."),
'reflection':_("A high-performing model trained with a new technique called Reflection-tuning, which teaches an LLM to detect mistakes in its reasoning and correct course."),
'llama3-chatqa':_("A model from NVIDIA based on Llama 3 that excels at conversational question answering (QA) and retrieval-augmented generation (RAG)."),
'mistral-large':_("Mistral Large 2 is Mistral's new flagship model that is significantly more capable in code generation, mathematics, and reasoning with 128k context window and support for dozens of languages."),
'moondream':_("moondream2 is a small vision language model designed to run efficiently on edge devices."),
'xwinlm':_("Conversational model based on Llama 2 that performs competitively on various benchmarks."),
'solar':_("A compact, yet powerful 10.7B large language model designed for single-turn conversation."),
'orca2':_("Orca 2 is built by Microsoft Research and is a fine-tuned version of Meta's Llama 2 models, designed to excel particularly at reasoning."),
'stable-beluga':_("A Llama 2-based model fine-tuned on an Orca-style dataset. Originally called Free Willy."),
'dolphin-phi':_("2.7B uncensored Dolphin model by Eric Hartford, based on the Phi language model by Microsoft Research."),
'yi-coder':_("Yi-Coder is a series of open-source code language models that deliver state-of-the-art coding performance with fewer than 10 billion parameters."),
'llava-phi3':_("A new small LLaVA model fine-tuned from Phi 3 Mini."),
'internlm2':_("InternLM2.5 is a 7B parameter model tailored for practical scenarios with outstanding reasoning capability."),
'yarn-mistral':_("An extension of Mistral to support context windows of 64K or 128K."),
'llama-pro':_("An expansion of Llama 2 that specializes in integrating both general language understanding and domain-specific knowledge, particularly in programming and mathematics."),
'llama3-groq-tool-use':_("A series of models from Groq that represent a significant advancement in open-source AI capabilities for tool use/function calling."),
'magicoder':_("🎩 Magicoder is a family of 7B parameter models trained on 75K synthetic instruction data using OSS-Instruct, a novel approach to enlightening LLMs with open-source code snippets."),
'stablelm-zephyr':_("A lightweight chat model that delivers accurate and responsive output without requiring high-end hardware."),
'codebooga':_("A high-performing code instruct model created by merging two existing code models."),
'deepseek-v2.5':_("An upgraded version of DeepSeek-V2 that integrates the general and coding abilities of both DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct."),
'bespoke-minicheck':_("A state-of-the-art fact-checking model developed by Bespoke Labs."),