
LLMs

caution

You are currently on a page documenting the use of text completion models. Many of the latest and most popular models are chat completion models.

Unless you are specifically using more advanced prompting techniques, you are probably looking for this page instead.

LLMs are language models that take a string as input and return a string as output.
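
For example, invoking a text completion model returns a plain string rather than a chat message. A minimal sketch using the OpenAI integration (assumes `langchain-openai` is installed and `OPENAI_API_KEY` is set; the model name is illustrative):

```python
from langchain_openai import OpenAI

# A text completion model: plain string in, plain string out.
llm = OpenAI(model="gpt-3.5-turbo-instruct")

completion = llm.invoke("Write a one-line haiku about type systems.")
print(completion)  # a plain string, not a chat message object
```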

info

If you'd like to write your own LLM, see this how-to. If you'd like to contribute an integration, see Contributing integrations.
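
As a rough sketch of what a custom LLM can look like (the echo behavior here is purely illustrative; see the how-to above for the full interface):

```python
from typing import Any, List, Optional

from langchain_core.language_models.llms import LLM


class EchoLLM(LLM):
    """Toy LLM that echoes the prompt back, truncated to `n` characters."""

    n: int = 100

    @property
    def _llm_type(self) -> str:
        return "echo-llm"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[Any] = None,
        **kwargs: Any,
    ) -> str:
        # A real implementation would call a model or an API here.
        return prompt[: self.n]


print(EchoLLM(n=20).invoke("Hello from a custom LLM"))
```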

| Provider | Package |
|---|---|
| AI21LLM | langchain-ai21 |
| AnthropicLLM | langchain-anthropic |
| AzureOpenAI | langchain-openai |
| BedrockLLM | langchain-aws |
| CohereLLM | langchain-cohere |
| FireworksLLM | langchain-fireworks |
| OllamaLLM | langchain-ollama |
| OpenAILLM | langchain-openai |
| TogetherLLM | langchain-together |
| VertexAILLM | langchain-google_vertexai |
| NVIDIA | langchain-nvidia |
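
Whichever provider you pick, the pattern is the same: install its partner package and import its LLM class. A hedged sketch with two of the packages above (model names are illustrative; credentials or a local Ollama server are assumed to be configured):

```python
# pip install langchain-openai langchain-ollama

from langchain_openai import OpenAI
from langchain_ollama import OllamaLLM

prompt = "Summarize what a text completion model does in one sentence."

# Hosted provider (needs OPENAI_API_KEY in the environment).
print(OpenAI(model="gpt-3.5-turbo-instruct").invoke(prompt))

# Local provider (needs a running Ollama server with the model pulled).
print(OllamaLLM(model="llama3.1").invoke(prompt))
```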

All LLMs

| Name | Description |
|---|---|
| AI21 Labs | See this page for the updated ChatAI21 object. |
| Aleph Alpha | The Luminous series is a family of large language models. |
| Alibaba Cloud PAI EAS | Machine Learning Platform for AI of Alibaba Cloud is a machine learni... |
| Amazon API Gateway | Amazon API Gateway is a fully managed service that makes it easy for ... |
| Anyscale | Anyscale is a fully-managed Ray platform, on which you can build, dep... |
| Aphrodite Engine | Aphrodite is the open-source large-scale inference engine designed to... |
| Arcee | This notebook demonstrates how to use the Arcee class for generating ... |
| Azure ML | Azure ML is a platform used to build, train, and deploy machine learn... |
| Azure OpenAI | You are currently on a page documenting the use of Azure OpenAI text ... |
| Baichuan LLM | Baichuan Inc. is a Chinese startup dedicated to Efficiency, Health, and Happiness. |
| Baidu Qianfan | Baidu AI Cloud Qianfan Platform is a one-stop large model development... |
| Banana | Banana is focused on building the machine learning infrastructure. |
| Baseten | Baseten is a Provider in the LangChain ecosystem that implements the ... |
| Beam | Calls the Beam API wrapper to deploy and make subsequent calls to an ... |
| Bedrock | You are currently on a page documenting the use of Amazon Bedrock mod... |
| Bittensor | Bittensor is a mining network, similar to Bitcoin, that includes buil... |
| CerebriumAI | Cerebrium is an AWS Sagemaker alternative. It also provides API acces... |
| ChatGLM | ChatGLM-6B is an open bilingual language model based on General Langu... |
| Clarifai | Clarifai is an AI Platform that provides the full AI lifecycle rangin... |
| Cloudflare Workers AI | Cloudflare AI documentation listed all generative text models availab... |
| Cohere | You are currently on a page documenting the use of Cohere models as t... |
| C Transformers | The C Transformers library provides Python bindings for GGML models. |
| CTranslate2 | CTranslate2 is a C++ and Python library for efficient inference with ... |
| Databricks | Databricks Lakehouse Platform unifies data, analytics, and AI on one ... |
| DeepInfra | DeepInfra is a serverless inference as a service that provides access... |
| DeepSparse | This page covers how to use the DeepSparse inference runtime within L... |
| Eden AI | Eden AI is revolutionizing the AI landscape by uniting the best AI pr... |
| ExLlamaV2 | ExLlamav2 is a fast inference library for running LLMs locally on mod... |
| Fireworks | You are currently on a page documenting the use of Fireworks models a... |
| ForefrontAI | The Forefront platform gives you the ability to fine-tune and use ope... |
| Friendli | Friendli enhances AI application performance and optimizes cost savin... |
| GigaChat | This notebook shows how to use LangChain with GigaChat. |
| Google AI | You are currently on a page documenting the use of Google models as t... |
| Google Cloud Vertex AI | You are currently on a page documenting the use of Google Vertex text... |
| GooseAI | GooseAI is a fully managed NLP-as-a-Service, delivered via API. Goose... |
| GPT4All | GitHub:nomic-ai/gpt4all an ecosystem of open-source chatbots trained ... |
| Gradient | Gradient allows to fine tune and get completions on LLMs with a simpl... |
| Huggingface Endpoints | The Hugging Face Hub is a platform with over 120k models, 20k dataset... |
| Hugging Face Local Pipelines | Hugging Face models can be run locally through the HuggingFacePipelin... |
| IBM watsonx.ai | WatsonxLLM is a wrapper for IBM watsonx.ai foundation models. |
| IPEX-LLM | IPEX-LLM is a PyTorch library for running LLM on Intel CPU and GPU (e... |
| Javelin AI Gateway Tutorial | This Jupyter Notebook will explore how to interact with the Javelin A... |
| JSONFormer | JSONFormer is a library that wraps local Hugging Face pipeline models... |
| KoboldAI API | KoboldAI is "a browser-based front-end for AI-assisted writing with... |
| Konko | Konko API is a fully managed Web API designed to help application dev... |
| Layerup Security | The Layerup Security integration allows you to secure your calls to a... |
| Llama.cpp | llama-cpp-python is a Python binding for llama.cpp. |
| Llamafile | Llamafile lets you distribute and run LLMs with a single file. |
| LM Format Enforcer | LM Format Enforcer is a library that enforces the output format of la... |
| Manifest | This notebook goes over how to use Manifest and LangChain. |
| Minimax | Minimax is a Chinese startup that provides natural language processin... |
| MLX Local Pipelines | MLX models can be run locally through the MLXPipeline class. |
| Modal | The Modal cloud platform provides convenient, on-demand access to ser... |
| MoonshotChat | Moonshot is a Chinese startup that provides LLM service for companies... |
| MosaicML | MosaicML offers a managed inference service. You can either use a var... |
| NLP Cloud | The NLP Cloud serves high performance pre-trained or custom models fo... |
| NVIDIA | This will help you getting started with NVIDIA models. For detailed d... |
| oci_generative_ai | Oracle Cloud Infrastructure Generative AI |
| OCI Data Science Model Deployment Endpoint | OCI Data Science is a fully managed and serverless platform for data ... |
| OctoAI | OctoAI offers easy access to efficient compute and enables users to i... |
| Ollama | You are currently on a page documenting the use of Ollama models as t... |
| OpaquePrompts | OpaquePrompts is a service that enables applications to leverage the ... |
| OpenAI | You are currently on a page documenting the use of OpenAI text comple... |
| OpenLLM | 🦾 OpenLLM lets developers run any open-source LLMs as OpenAI-compati... |
| OpenLM | OpenLM is a zero-dependency OpenAI-compatible LLM provider that can c... |
| OpenVINO | OpenVINO™ is an open-source toolkit for optimizing and deploying AI i... |
| Outlines | This will help you getting started with Outlines LLM. For detailed do... |
| Petals | Petals runs 100B+ language models at home, BitTorrent-style. |
| PipelineAI | PipelineAI allows you to run your ML models at scale in the cloud. It... |
| Predibase | Predibase allows you to train, fine-tune, and deploy any ML model—fro... |
| Prediction Guard | Basic LLM usage |
| PromptLayer OpenAI | PromptLayer is the first platform that allows you to track, manage, a... |
| RELLM | RELLM is a library that wraps local Hugging Face pipeline models for ... |
| Replicate | Replicate runs machine learning models in the cloud. We have a librar... |
| Runhouse | Runhouse allows remote compute and data across environments and users... |
| SageMakerEndpoint | Amazon SageMaker is a system that can build, train, and deploy machin... |
| SambaNovaCloud | SambaNova's SambaNova Cloud is a platform for performing inference wi... |
| SambaStudio | SambaNova's Sambastudio is a platform that allows you to train, run b... |
| Solar | This community integration is deprecated. You should use ChatUpstage ... |
| SparkLLM | SparkLLM is a large-scale cognitive model independently developed by ... |
| StochasticAI | Stochastic Acceleration Platform aims to simplify the life cycle of a... |
| Nebula (Symbl.ai) | Nebula is a large language model (LLM) built by Symbl.ai. It is train... |
| TextGen | GitHub:oobabooga/text-generation-webui A gradio web UI for running La... |
| Titan Takeoff | TitanML helps businesses build and deploy better, smaller, cheaper, a... |
| Together AI | You are currently on a page documenting the use of Together AI models... |
| Tongyi Qwen | Tongyi Qwen is a large-scale language model developed by Alibaba's Da... |
| vLLM | vLLM is a fast and easy-to-use library for LLM inference and serving,... |
| Volc Engine Maas | This notebook provides you with a guide on how to get started with Vo... |
| Intel Weight-Only Quantization | Weight-Only Quantization for Huggingface Models with Intel Extension ... |
| Writer LLM | Writer is a platform to generate different language content. |
| Xorbits Inference (Xinference) | Xinference is a powerful and versatile library designed to serve LLMs, |
| YandexGPT | This notebook goes over how to use Langchain with YandexGPT. |
| Yi | 01.AI, founded by Dr. Kai-Fu Lee, is a global company at the forefron... |
| Yuan2.0 | Yuan2.0 is a new generation Fundamental Large Language Model develope... |
