llm: LLM wrapper to use.
outputKey: Key to use for output; defaults to "text".
prompt: Prompt object to use.
llmKwargs (optional): Kwargs to pass to the LLM.
memory (optional)
outputParser (optional): OutputParser to use.
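Taken together, the fields above wire a prompt template to a model and key the completion under `outputKey`. A minimal self-contained sketch of how the pieces fit (the LLM and prompt here are stand-in stubs, not the real LangChain classes):

```typescript
// Sketch of the LLMChain field layout; LLM and PromptTemplate are stubs.
type LLM = (text: string) => Promise<string>;

interface PromptTemplate {
  format(values: Record<string, string>): string;
}

class LLMChainSketch {
  constructor(
    public llm: LLM,                   // LLM wrapper to use
    public prompt: PromptTemplate,     // Prompt object to use
    public outputKey: string = "text", // Key to use for output, defaults to "text"
  ) {}

  async call(values: Record<string, string>): Promise<Record<string, string>> {
    const formatted = this.prompt.format(values);
    const completion = await this.llm(formatted);
    return { [this.outputKey]: completion };
  }
}

// Stub usage: the "LLM" just echoes its prompt back.
const chain = new LLMChainSketch(
  async (text) => `echo: ${text}`,
  { format: (v) => `Tell me a ${v.adjective} joke` },
);
chain.call({ adjective: "funny" }).then((out) => console.log(out.text));
// logs "echo: Tell me a funny joke"
```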
apply: Call the chain on all inputs in the list.
Deprecated: Use .batch() instead. Will be removed in 0.2.0. This feature is not recommended for use.
config (optional): (RunnableConfig | CallbackManager | (BaseCallbackHandler | BaseCallbackHandlerMethodsClass)[])[]
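Conceptually, apply maps a single-input call over the list of inputs. This simplified stand-alone model ignores callbacks/config threading, which the real implementation also handles:

```typescript
// Simplified model: apply maps the single-input call over a list of inputs.
type ChainCall = (input: Record<string, string>) => Promise<Record<string, string>>;

async function applySketch(
  call: ChainCall,
  inputs: Record<string, string>[],
): Promise<Record<string, string>[]> {
  return Promise.all(inputs.map((input) => call(input)));
}

// Stub chain that uppercases its "text" input.
const stubCall: ChainCall = async (input) => ({ text: input.text.toUpperCase() });
applySketch(stubCall, [{ text: "a" }, { text: "b" }]).then(console.log);
// logs [ { text: 'A' }, { text: 'B' } ]
```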
_call: Run the core logic of this chain and add to output if desired.
call: Wraps _call and handles memory.
predict: Format prompt with values and pass to LLM.
values: Keys to pass to the prompt template.
config (optional): BaseCallbackConfig | CallbackManager | (BaseCallbackHandler | BaseCallbackHandlerMethodsClass)[]
callbackManager (optional): CallbackManager to use.
Returns: Completion from LLM.

Example:
llm.predict({ adjective: "funny" })
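In other words, predict formats the prompt with the given values, runs the LLM, and returns the bare completion string rather than a keyed record. A self-contained sketch of that behavior (stub LLM and formatter, not the real classes):

```typescript
// Simplified model of predict: format the prompt, run the LLM, return the string.
type LLMFn = (text: string) => Promise<string>;

async function predictSketch(
  llm: LLMFn,
  formatPrompt: (values: Record<string, string>) => string,
  values: Record<string, string>,
): Promise<string> {
  return llm(formatPrompt(values));
}

// Stub LLM that echoes its prompt back.
predictSketch(
  async (text) => `echo: ${text}`,
  (v) => `Tell me a ${v.adjective} joke`,
  { adjective: "funny" },
).then(console.log); // logs "echo: Tell me a funny joke"
```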
Static deserialize: Load a chain from a json-like object describing it.
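As an illustrative sketch only, deserialization amounts to reading a plain object that names the chain type and its parts and rebuilding them; the field names below (_type, prompt.template, output_key) are assumptions, not the exact serialized format:

```typescript
// Illustrative sketch: the shape of this json-like description is assumed.
interface SerializedChainSketch {
  _type: string;
  prompt: { template: string };
  output_key?: string;
}

function deserializeSketch(data: SerializedChainSketch) {
  if (data._type !== "llm_chain") {
    throw new Error(`Unknown chain type: ${data._type}`);
  }
  return {
    outputKey: data.output_key ?? "text",
    // Naive {placeholder} substitution standing in for a real prompt template.
    format: (values: Record<string, string>) =>
      data.prompt.template.replace(/\{(\w+)\}/g, (_, k) => values[k] ?? ""),
  };
}

const loaded = deserializeSketch({
  _type: "llm_chain",
  prompt: { template: "Tell me a {adjective} joke" },
});
console.log(loaded.format({ adjective: "funny" }));
// logs "Tell me a funny joke"
```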
Deprecated: This class will be removed in 0.3.0. Use the LangChain Expression Language (LCEL) instead. See the example below for how to use LCEL in place of the LLMChain class:
Chain to run queries against LLMs.
Example
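A sketch of the LCEL replacement, assuming the @langchain/core and @langchain/openai packages are installed and an OpenAI API key is set in the environment:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatOpenAI } from "@langchain/openai";

// LCEL equivalent of new LLMChain({ llm, prompt }): pipe the prompt into
// the model, then into a string output parser.
const prompt = ChatPromptTemplate.fromTemplate("Tell me a {adjective} joke");
const model = new ChatOpenAI({});
const lcelChain = prompt.pipe(model).pipe(new StringOutputParser());

const response = await lcelChain.invoke({ adjective: "funny" });
```

The piped chain is itself a Runnable, so it also supports .batch() and .stream() without the deprecated apply method.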