Explained: What is LangChain? How to Use LangChain Chains?

The Intricacies of LangChain Chains: Unleashing the Potential of Multi-Model LLM Solutions


LangChain is a captivating tool that is making a significant impact in the dynamic sphere of large language models (LLMs). In this blog post, we will explore the linchpin of this groundbreaking tool: LangChain Chains. What are they? How do they work? And why are they so crucial to LangChain?

What is LangChain? What is LangChain AI?

Before we dive deep into LangChain Chains, let's understand LangChain itself. LangChain is a robust library designed to streamline interaction with several large language model (LLM) providers, such as OpenAI, Cohere, and Hugging Face, as well as models like BLOOM. LangChain's unique proposition is its ability to create Chains, which are logical links between one or more LLMs. This characteristic is what provides LangChain with its considerable utility.

You can read more about LangChain with our LangChain Introduction Article.

What are LangChain Chains?

Chains are the vital core of LangChain. These logical connections between one or more LLMs are the backbone of LangChain's functionality. Chains can range from simple to complex, contingent on the necessities and the LLMs involved. Let's delve deeper into both types:

Basic Chains

A basic chain is the simplest form of a chain that can be crafted. It involves a single LLM receiving an input prompt and using that prompt for text generation.

Take, for instance, this simple scenario. We construct a basic chain using Huggingface as our LLM provider with the following code:

from langchain.prompts import PromptTemplate
from langchain.llms import HuggingFaceHub
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["city"],
    template="Describe a perfect day in {city}?",
)
# repo_id points at an example model hosted on the Hugging Face Hub
llm = HuggingFaceHub(repo_id="google/flan-t5-large")
llmchain = LLMChain(llm=llm, prompt=prompt)
print(llmchain.run("Paris"))

The output will be an AI-generated text describing a perfect day in Paris.
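Under the hood, PromptTemplate performs simple placeholder substitution before the text reaches the model. A minimal plain-Python sketch of that step (no LangChain required; the `fill_template` helper is ours, purely for illustration):

```python
def fill_template(template: str, **variables) -> str:
    """Substitute {placeholders} in a prompt template, much like PromptTemplate.format()."""
    return template.format(**variables)

prompt_text = fill_template("Describe a perfect day in {city}?", city="Paris")
print(prompt_text)  # → Describe a perfect day in Paris?
```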

Advanced Chains

Advanced chains, also known as utility chains, are made up of multiple LLMs to address a particular task. A suitable example is the SummarizeAndTranslateChain, which is aimed at tasks like summarization and translation.

For instance, LangChain features a specific utility chain named TopicModellingChain, which reads articles and generates a list of relevant topics.

from langchain.chains import TopicModellingChain

topic_chain = TopicModellingChain(llm=llm, verbose=True)
topics = topic_chain.run("Text of a scientific article about climate change.")

The output here gives a list of topics relevant to the provided scientific article about climate change.
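To build intuition for what topic extraction produces, here is a deliberately naive word-frequency approximation in plain Python. This is only a toy illustration of the output shape, not how the chain works internally:

```python
from collections import Counter

STOPWORDS = {"a", "the", "of", "about", "and", "is", "in", "to"}

def naive_topics(text: str, k: int = 3):
    """Return the k most frequent non-stopword tokens as crude 'topics'."""
    words = [w.strip(".,").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(k)]

print(naive_topics("Climate change affects climate patterns and climate policy."))
```

A real topic-modelling chain would use the LLM's understanding of the text rather than raw counts, but the result is similarly a short list of salient topics.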

The Strength of LangChain Chains

What makes LangChain Chains so pivotal?

LangChain Chains empower users to effortlessly link various LLMs, amalgamating the strengths of different models and facilitating the execution of more complex and sophisticated tasks. They augment the abilities of traditional single-model arrangements, paving the way for inventive solutions to intricate problems.

LangChain Chains represent a substantial advancement in the LLM world. By bridging the gaps between models in an accessible way, they could become an invaluable tool for developers worldwide, from hobbyists to enterprise-level professionals. With the power of LangChain Chains, there is no limit to what large language models can achieve.

Chains Without LLMs and Prompts

While the PalChain we discussed before requires an LLM (and a corresponding prompt) to parse the user's question written in natural language, there exist chains in LangChain that don't need one. These are mainly transformation chains that preprocess the prompt, such as removing extra spaces, before inputting it into the LLM. For instance:

from langchain.chains import TrimSpacesChain

trim_chain = TrimSpacesChain()
trimmed_prompt = trim_chain.run("   What is the   weather   like?   ")
print(trimmed_prompt)  # Outputs: "What is the weather like?"

In this case, TrimSpacesChain is utilized to clean the prompt by eliminating unnecessary spaces. After the preprocessing, you can feed the cleaned prompt into any LLM of your choice.
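The trimming step itself needs no LLM at all; a plain-Python sketch of the kind of transform such a preprocessing chain would run (the `trim_spaces` helper is ours, not a LangChain API):

```python
def trim_spaces(text: str) -> str:
    """Collapse runs of whitespace and strip the ends, as a preprocessing chain might."""
    return " ".join(text.split())

print(trim_spaces("   What is the   weather   like?   "))
# → What is the weather like?
```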

Crafting Chains: The SimpleSequentialChain

Now, let's focus on a specific chain type: the SimpleSequentialChain. This chain type allows us to logically link LLMs such that the output of one serves as the input for the next.

Consider a scenario where we first need to clean the prompt (remove extra whitespaces, shorten it, etc.), then use this cleaned prompt to call an LLM. Let's use TrimSpacesChain from above and combine it with an LLM Chain.

from langchain.chains import SimpleSequentialChain

trim_chain = TrimSpacesChain()
llm_chain = LLMChain(llm=llm, prompt=prompt)
combined_chain = SimpleSequentialChain(chains=[trim_chain, llm_chain])
output = combined_chain.run("   What is the   weather   like?   ")
print(output)  # Outputs AI-generated text based on the cleaned prompt

In this setup, the output from TrimSpacesChain (cleaned prompt) is passed as input to the LLMChain.
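Conceptually, SimpleSequentialChain is just function composition: each step receives the previous step's output. A toy illustration in plain Python (the names here are ours, not LangChain's, and the `shout` step stands in for an LLM call):

```python
def run_sequential(steps, user_input):
    """Pipe a single string through a list of callables, like SimpleSequentialChain."""
    value = user_input
    for step in steps:
        value = step(value)
    return value

trim = lambda s: " ".join(s.split())   # preprocessing step
shout = lambda s: s.upper()            # stand-in for an LLM call
print(run_sequential([trim, shout], "   hello   world  "))  # → HELLO WORLD
```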

The Role of Agents in LangChain

Agents in LangChain present an innovative way to dynamically call LLMs based on user input. They not only have access to an LLM but also a suite of tools (like Google Search, Python REPL, math calculator, weather APIs, etc.) that can interact with the outside world.

Let's consider a practical example using LangChain's ZERO_SHOT_REACT_DESCRIPTION agent:

from langchain.agents import initialize_agent, AgentType, load_tools
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
tools = load_tools(["pal-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
response = agent.run("If my age is half of my dad's age and he is going to be 60 next year, what is my current age?")
print(response)  # Outputs: "My current age is 29.5 years old."

In this case, the agent leverages the pal-math tool and an OpenAI LLM to solve a math problem embedded in a natural language prompt. It demonstrates a practical case where the agent brings additional value by understanding the prompt, choosing the correct tool to solve the task, and eventually returning a meaningful response.
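The arithmetic the pal-math tool works out can be verified directly in plain Python: if dad turns 60 next year, he is 59 today, and half of that is 29.5.

```python
dads_age_next_year = 60
dads_age_now = dads_age_next_year - 1  # he turns 60 next year, so he is 59 today
my_age = dads_age_now / 2              # my age is half of his current age
print(my_age)  # → 29.5
```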

With the dynamic capabilities of chains and agents in LangChain, users can flexibly design multi-step language processing workflows. Whether you want to preprocess prompts, create multi-LLM chains, or use agents to dynamically choose LLMs and tools, LangChain provides the building blocks to make it happen.

Advanced Use Case: Generate Movie Recommendations based on User's Favorite Genres

For this use case, we’ll be working with two chains:

Chain #1 — An LLM chain that asks the user about their favorite movie genres.

Chain #2 — Another LLM chain that uses the genres from the first chain to recommend movies from the genres selected.

Firstly, let's create the chain_one.

template = '''You are a movie genre survey. Ask the user about their favorite genres.
Favorite genres:'''
prompt_template = PromptTemplate(input_variables=[], template=template)
chain_one = LLMChain(llm=llm, prompt=prompt_template)

Then, we create chain_two to recommend movies.

template = '''You are a movie recommender. Given the user's favorite genres: {genres}, suggest some movies that fall under these genres.
Suggest movies:'''
prompt_template = PromptTemplate(input_variables=["genres"], template=template)
chain_two = LLMChain(llm=llm, prompt=prompt_template)

Next, let’s combine these chains using SimpleSequentialChain.

from langchain.chains import SimpleSequentialChain

overall_chain = SimpleSequentialChain(
    chains=[chain_one, chain_two],
    verbose=True,
)

You'll notice that we haven't explicitly mentioned input_variables and output_variables for SimpleSequentialChain. This is because the underlying assumption is that the output from chain 1 is passed as input to chain 2.

Now, let's run this chain:

overall_chain.run('What are your favorite movie genres?')
# > Entering new SimpleSequentialChain chain...
# > Entering new LLMChain...
# Favorite genres: Action, Drama, Comedy
# > Finished chain.
# Given your favorite genres: Action, Drama, Comedy, some movie recommendations would be:
# From Action: Die Hard, Mad Max: Fury Road, The Dark Knight
# From Drama: The Shawshank Redemption, Schindler's List, Forrest Gump
# From Comedy: Superbad, The Hangover, Bridesmaids
# > Finished chain.

The first LLMChain asks for the favorite genres and the second uses this input to generate movie recommendations. This is a great example of how you can combine multiple LLMChains in a SimpleSequentialChain to create a personalized movie recommendation system.

Now, let's say you also wanted to get some additional information from the user, like their preferred movie length. We can update the chain_two to take another input variable, movie_length.

template = '''You are a movie recommender. Given the user's favorite genres: {genres} and their preferred movie length: {movie_length}, suggest some movies that fall under these genres.
Suggest movies:'''
prompt_template = PromptTemplate(input_variables=["genres", "movie_length"], template=template)
chain_two = LLMChain(llm=llm, prompt=prompt_template)

Finally, we combine the chains using SequentialChain, which allows us to handle multiple inputs and outputs.

from langchain.chains import SequentialChain
from langchain.memory import SimpleMemory

overall_chain = SequentialChain(
    memory=SimpleMemory(memories={"movie_length": "2 hours"}),
    chains=[chain_one, chain_two],
    input_variables=[],  # SequentialChain requires this; chain_one takes no input
    verbose=True,
)
In this scenario, we're using SimpleMemory to store the additional context 'movie_length' which should not change between prompts. When we run this chain, it will suggest movies not only based on the favorite genres, but also consider the preferred movie length.
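In essence, SimpleMemory merges a fixed dictionary into every chain call's inputs. A toy sketch of that behavior in plain Python (the class and method names here are ours, not the LangChain API):

```python
class FixedMemory:
    """Holds values that never change between prompts, like SimpleMemory."""

    def __init__(self, memories):
        self.memories = dict(memories)

    def load(self, inputs):
        # Merge the stored context into the per-call inputs.
        return {**self.memories, **inputs}

memory = FixedMemory({"movie_length": "2 hours"})
print(memory.load({"genres": "Action, Comedy"}))
# → {'movie_length': '2 hours', 'genres': 'Action, Comedy'}
```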



I encourage you to experiment and play around with LangChain. The possibilities are limitless when it comes to chaining together different AI agents and models to create even more powerful and useful applications!


FAQ

  1. What is LangChain? LangChain is a Python library that allows you to create and chain together different AI models, agents, and prompts in a structured way. It's perfect for creating complex AI applications where you need to interact with multiple models in a sequence.

  2. What is SimpleSequentialChain and SequentialChain in LangChain? Both are types of chains in LangChain that allow chaining together different AI entities (models, agents, etc.). SimpleSequentialChain is used when there is a single input and single output between chains, while SequentialChain is used when there are multiple inputs and outputs.

  3. How can I add additional context using LangChain? You can use SimpleMemory to add additional context in LangChain. It's an easy way to store context or other bits of information that shouldn’t ever change between prompts.

  4. Can I create custom agents in LangChain? Yes, you can create custom agents using the Agent class in LangChain. Agents can encapsulate complex logic and be a part of a chain.

  5. Where can I find more information about LangChain? You can find more information about LangChain on its official documentation website. It's an open-source project, so you can also check out its source code on GitHub.