LangGraph vs Semantic Kernel: The One Decision That Will Shape Your AI Agent Architecture

A technical deep-dive for Python developers building production AI agents — grounded in official docs, no hype.


Most comparisons of these two frameworks are already out of date. Both LangGraph and Semantic Kernel hit major milestones in the last six months. Here is what actually changed, what the code looks like today, and which one you should reach for.

Why This Comparison Matters Right Now

Here is the honest reality of building Python AI agents in 2026: you have two genuinely good framework choices, and picking the wrong one for the wrong problem will cost you architecture refactors, not just a few hours of code changes.

LangGraph and Semantic Kernel have both crossed major milestones since most popular comparisons were written:

  • LangGraph hit v1.0 in October 2025 — with a formal stability commitment and no breaking changes until 2.0 [1][2]
  • LangChain 1.0’s create_agent now runs on the LangGraph runtime underneath, making LangGraph the execution engine of the LangChain ecosystem [2]
  • Semantic Kernel shipped first-class MCP support for Python in v1.28.1 — SK can now act as both an MCP client and server natively in the SDK [7]

If you are still reading comparisons that call LangGraph “unstable” or Semantic Kernel “too tied to .NET”, you are reading old content.

This post is grounded in the official LangGraph docs [1][4], the official Semantic Kernel docs [5][6], and both framework changelogs.

TL;DR: The One-Line Decision Rule

| If your problem is | Then use this |
| --- | --- |
| Stateful, durable, resumable agent workflows with explicit control | LangGraph |
| Protocol-first, plugin-composed, interoperable agent platforms | Semantic Kernel |

That distinction explains every trade-off in this article.

Architecture: Two Very Different Mental Models

LangGraph — The Graph Runtime

LangGraph models your agent system as a stateful graph where you explicitly define state, nodes, and edges. Nodes are Python callables or subgraphs. Edges are transitions. State is a typed object that flows through the graph and gets updated at each step.

That is not an internal implementation detail — it is the primary abstraction you work with every day.

The official LangGraph v1 docs [1] describe the framework around three core ideas: durable execution, controllability, and human-in-the-loop. Resuming a workflow from the last checkpoint after a crash, inserting a human review step, or branching into parallel sub-agents are first-class operations — not workarounds.

Since LangGraph v1, LangChain’s create_agent lives on top of this runtime [2]. The stack now has a clean separation:

  • Start with create_agent for standard tool-calling loops
  • Drop down to raw LangGraph when you need explicit workflow topology
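
To make the graph mental model concrete, here is a toy sketch in plain Python. This is not the LangGraph API — just an illustration of the idea that state flows through named nodes and a routing function plays the role of conditional edges:

```python
# Toy illustration of the graph mental model -- NOT the LangGraph API.
# State is a dict, nodes are callables that return updated state,
# and a routing function (the "edges") decides which node runs next.
from typing import Callable, Dict

State = Dict[str, object]

def draft(state: State) -> State:
    return {**state, "text": f"draft of {state['topic']}"}

def review(state: State) -> State:
    return {**state, "approved": "draft" in str(state["text"])}

def route(state: State) -> str:
    # Conditional edge: pick the next node based on current state
    if "text" not in state:
        return "draft"
    if "approved" not in state:
        return "review"
    return "END"

nodes: Dict[str, Callable[[State], State]] = {"draft": draft, "review": review}

def run(state: State) -> State:
    step = route(state)
    while step != "END":
        state = nodes[step](state)
        step = route(state)
    return state

print(run({"topic": "weather"}))
# {'topic': 'weather', 'text': 'draft of weather', 'approved': True}
```

In real LangGraph, the nodes, edges, and state schema are declared on a `StateGraph` and compiled; the point here is only that the topology is something you define explicitly, up front.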

Semantic Kernel — The Kernel-Plugin Middleware

Semantic Kernel starts from the Kernel abstraction, which holds AI services, plugins, and functions. Plugins are groups of functions exposed to the model and to agents, and can come from native Python code, prompt templates, or imported external schemas.

The official SK agent-functions docs [5] state:

“Any Plugin available to an Agent is managed within its respective Kernel instance — this enables each Agent to access distinct functionalities based on its specific role.”

Orchestration emerges from agents choosing functions and planners sequencing capability calls — rather than from a graph topology you define up front.

This makes Semantic Kernel feel more like AI middleware. You shape what your agent can do, then let function calling and the agent framework decide how to do it.
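
A toy sketch of that middleware idea — again, not the Semantic Kernel API, just the shape of it: a registry of named capabilities, with the model (faked here by a keyword match) choosing which one to invoke:

```python
# Toy sketch of the kernel-plugin idea -- NOT the Semantic Kernel API.
# A "kernel" is a registry of named functions; the model (simulated
# below with a keyword match) picks which capability to call.
from typing import Callable, Dict

class ToyKernel:
    def __init__(self) -> None:
        self.functions: Dict[str, Callable[..., str]] = {}

    def add_function(self, name: str, fn: Callable[..., str]) -> None:
        self.functions[name] = fn

    def invoke(self, name: str, **kwargs) -> str:
        return self.functions[name](**kwargs)

kernel = ToyKernel()
kernel.add_function("get_weather", lambda city: f"Sunny in {city}")
kernel.add_function("get_time", lambda city: f"12:00 in {city}")

# In SK, the LLM chooses the function via auto function calling;
# here that choice is faked with a keyword match.
def choose(request: str) -> str:
    return "get_weather" if "weather" in request else "get_time"

print(kernel.invoke(choose("weather in Mumbai"), city="Mumbai"))
# Sunny in Mumbai
```

Notice what is absent: there is no workflow topology anywhere. You shape the capability surface; the "how" emerges at runtime.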


Code: The Same Agent in Both Frameworks

To make the architectural differences concrete, let us build the same agent in both: a multi-turn weather assistant with memory and a system prompt.

LangGraph — Weather Agent with Checkpointing

Pattern from the official LangGraph agents quickstart [4]

# pip install -U langgraph "langchain[openai]"
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import InMemorySaver
from langchain.chat_models import init_chat_model

# --- Tool: plain Python function ---
def get_weather(city: str) -> str:
    """Get the current weather for a given city."""
    return f"It's sunny and 28°C in {city}."

# --- LLM ---
model = init_chat_model("openai:gpt-4o-mini", temperature=0)

# --- Checkpointer: swap for SqliteSaver or PostgresSaver in production ---
checkpointer = InMemorySaver()

# --- Compile graph agent ---
agent = create_react_agent(
    model=model,
    tools=[get_weather],
    prompt="You are a helpful weather assistant.",
    checkpointer=checkpointer,
)

# --- thread_id binds this conversation to a persistent checkpoint ---
config = {"configurable": {"thread_id": "user-session-1"}}

# Turn 1
response = agent.invoke(
    {"messages": [{"role": "user", "content": "What is the weather in Mumbai?"}]},
    config=config,
)
print(response["messages"][-1].content)

# Turn 2 - agent remembers context automatically via checkpointer
followup = agent.invoke(
    {"messages": [{"role": "user", "content": "How about Delhi?"}]},
    config=config,
)
print(followup["messages"][-1].content)

What is happening architecturally:

  • create_react_agent compiles a StateGraph with a model-tool loop under the hood
  • The checkpointer persists state at every step; the same thread_id resumes from the last saved state automatically
  • If the process crashes mid-run, restarting and invoking with the same thread_id picks up from the last checkpoint — durability is a runtime concern, not your concern [3]
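
The checkpointer contract can be sketched in a few lines of stdlib Python. This toy version (not LangGraph's actual `BaseCheckpointSaver` interface) shows why the same `thread_id` survives a process restart — state is persisted by the runtime after every step, not held in memory by the caller:

```python
# Toy sketch of checkpointer semantics -- NOT the LangGraph API.
# The "runtime" saves state after every step keyed by thread_id, so a
# fresh process can resume a conversation from the last checkpoint.
import json
import os
import tempfile

class ToyCheckpointer:
    def __init__(self, path: str) -> None:
        self.path = path

    def save(self, thread_id: str, state: dict) -> None:
        store = self._load_all()
        store[thread_id] = state
        with open(self.path, "w") as f:
            json.dump(store, f)

    def load(self, thread_id: str) -> dict:
        return self._load_all().get(thread_id, {"messages": []})

    def _load_all(self) -> dict:
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
cp = ToyCheckpointer(path)

# Turn 1: state is persisted by the "runtime", not the caller
state = cp.load("user-session-1")
state["messages"].append("What is the weather in Mumbai?")
cp.save("user-session-1", state)

# Simulated restart: a fresh load resumes from the last checkpoint
resumed = cp.load("user-session-1")
print(resumed["messages"])  # ['What is the weather in Mumbai?']
```

Swapping `InMemorySaver` for a SQLite- or Postgres-backed saver in real LangGraph is exactly this move: same interface, durable backing store.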

Semantic Kernel — Weather Agent with Plugin

Pattern from the official SK agent-functions docs [5]

# pip install semantic-kernel
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAIChatPromptExecutionSettings,
)
from semantic_kernel.contents import ChatHistory
from semantic_kernel.functions import KernelArguments, kernel_function

# --- Plugin: class with @kernel_function decorators ---
class WeatherPlugin:
    @kernel_function(name="get_weather", description="Get the weather for a city.")
    def get_weather(self, city: str) -> str:
        return f"It's sunny and 28°C in {city}."

# --- Kernel: holds services and plugins ---
kernel = Kernel()
kernel.add_service(OpenAIChatCompletion(ai_model_id="gpt-4o-mini"))

# --- Register plugin ---
kernel.add_plugin(WeatherPlugin(), plugin_name="WeatherPlugin")

# --- Execution settings: enable auto function calling ---
settings = OpenAIChatPromptExecutionSettings()
settings.function_choice_behavior = FunctionChoiceBehavior.Auto()

# --- Agent: kernel + instructions + settings ---
# Note: the settings must actually reach the agent, here via KernelArguments
agent = ChatCompletionAgent(
    kernel=kernel,
    name="WeatherAssistant",
    instructions="You are a helpful weather assistant.",
    arguments=KernelArguments(settings=settings),
)

async def run_agent():
    history = ChatHistory()

    # Turn 1
    history.add_user_message("What is the weather in Mumbai?")
    async for message in agent.invoke(history):
        print(f"Agent: {message.content}")
        history.add_message(message)

    # Turn 2
    history.add_user_message("How about Delhi?")
    async for message in agent.invoke(history):
        print(f"Agent: {message.content}")
        history.add_message(message)

asyncio.run(run_agent())

What is happening architecturally:

  • The Kernel holds the AI service and plugins as a dependency container
  • @kernel_function decorators make Python methods discoverable and invocable by the model automatically
  • FunctionChoiceBehavior.Auto() tells the model to call functions when needed
  • Memory is a ChatHistory object you manage and pass into each invocation — the runtime does not persist it for you [5]
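
Because the caller owns the history, persisting it across process restarts is also the caller's job. A minimal sketch of that responsibility, using a plain list of role/content dicts serialized to JSON rather than SK's actual `ChatHistory` class:

```python
# Toy sketch: with SK, state management is yours, so surviving a
# restart means serializing the history yourself. This uses a plain
# list of role/content dicts, not SK's ChatHistory class.
import json
import os
import tempfile

def load_history(path: str) -> list:
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return []

def save_history(path: str, history: list) -> None:
    with open(path, "w") as f:
        json.dump(history, f)

path = os.path.join(tempfile.mkdtemp(), "history.json")
history = load_history(path)
history.append({"role": "user", "content": "What is the weather in Mumbai?"})
history.append({"role": "assistant", "content": "Sunny and 28°C."})
save_history(path, history)

# After a restart, the caller reloads and continues the conversation
restored = load_history(path)
print(len(restored))  # 2
```

This is the mirror image of the LangGraph checkpointer: the same durability, but the application, not the runtime, carries it.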

The Most Revealing Difference in 6 Lines

# LangGraph — runtime owns durability
checkpointer = InMemorySaver()
config = {"configurable": {"thread_id": "session-1"}}
agent.invoke(messages, config)  # resumes from last checkpoint automatically

# Semantic Kernel — you own state
history = ChatHistory()
history.add_user_message("...")
agent.invoke(history)  # you pass and maintain state explicitly

In LangGraph, durability is a runtime concern. In Semantic Kernel, state management is your concern. Neither is wrong — they match different application models.

Protocol Support: MCP and A2A

This is where Semantic Kernel has made its most significant leap recently.

Semantic Kernel — Native MCP in the Python SDK

The official SK MCP announcement [7] states:

“Python support for MCP has arrived… SK Python can act as both an MCP Host and an MCP Server, support multiple transport methods (stdio, SSE, WebSocket), chain multiple MCP servers together, and expose SK functions or agents as MCP servers.”

That is not an adapter or community plugin. It is first-class SDK support from v1.28.1+. For teams building tools and agents that need to cross service boundaries via a standard protocol, this is a meaningful architectural upgrade.

LangGraph — Strong MCP at the Deployment Edge

LangGraph’s MCP story is more about deployment than in-process integration. When deployed on the LangGraph Platform, every agent is automatically exposed as an MCP-accessible endpoint at /mcp with no extra code required. For self-hosted deployments, integration is available via the langchain-mcp-adapters package.

Bottom line:

  • SK is stronger when you want MCP semantics inside your Python process
  • LangGraph is stronger when you think of agents as deployed services that other clients consume via MCP

Stability and Breaking Changes: The 2026 Reality

LangGraph v1 (October 2025): The official v1 release notes [1] state that the core graph APIs and execution model are unchanged. The main migration note is deprecation of create_react_agent in langgraph.prebuilt in favour of LangChain's create_agent. The LangGraph 1.0 announcement [2] explicitly commits to no breaking changes until 2.0.

Semantic Kernel 1.x: Most architectural disruption landed at 1.0 (namespace reorg, API renames, context variable changes). The H1 2025 SK roadmap [8] and subsequent releases show an incremental, additive pattern with targeted fixes rather than structural breaks.

The old narrative of “LangGraph breaks every release” is no longer accurate. Both frameworks are now in a stability-first phase.

Updated Technical Ratings (April 2026)

Based on official docs and both frameworks’ current stable releases:

The scores are intentionally close. Both are production-grade frameworks solving real problems well. The winner for your team is whichever abstraction maps better to how you think about the problem you are solving.

When to Choose Which

✅ Choose LangGraph when:

  • Your agent logic involves non-trivial branching, retries, human review, or approval steps that benefit from explicit graph topology
  • You need durable execution — workflows that survive crashes, resume from checkpoints, and have auditable step history [3]
  • You are already invested in the LangChain ecosystem and want the clean create_agent → LangGraph stack with a clear upgrade path [2]
  • You want fine-grained observability into how execution moved through a workflow at the node level

✅ Choose Semantic Kernel when:

  • You are building a platform or SDK where capabilities are composed as plugins and different agents consume different tool surfaces [6]
  • MCP or A2A interoperability is a core requirement and you want it natively in the Python SDK, not via adapters [7]
  • Your team already uses a DI/service-oriented architecture and the kernel-plugin model maps naturally to it
  • You want lightweight deployment without a dedicated orchestration runtime and can manage state externally

The One-Line Rule — Revisited

If your agent needs to behave like a durable state machine, use LangGraph.
If your agent needs to behave like a protocol-aware platform component, use Semantic Kernel.

That is the comparison most blog posts are not making. Hopefully this one was useful.

References

[1] LangGraph v1 release notes — https://docs.langchain.com/oss/python/releases/langgraph-v1

[2] LangChain + LangGraph 1.0 announcement — https://blog.langchain.com/langchain-langgraph-1dot0/

[3] LangGraph durable execution docs — https://langchain-ai.github.io/langgraph/how-tos/persistence/

[4] LangGraph agents quickstart — https://langchain-ai.github.io/langgraph/agents/agents/

[5] Semantic Kernel agent-functions docs — https://learn.microsoft.com/en-us/semantic-kernel/frameworks/agent/agent-functions

[6] Semantic Kernel plugins docs — https://learn.microsoft.com/en-us/semantic-kernel/concepts/plugins/

[7] Semantic Kernel MCP Python support — https://devblogs.microsoft.com/semantic-kernel/semantic-kernel-adds-model-context-protocol-mcp-support-for-python/

[8] Semantic Kernel H1 2025 roadmap — https://devblogs.microsoft.com/semantic-kernel/semantic-kernel-roadmap-h1-2025-accelerating-agents-processes-and-integration/


LangGraph vs Semantic Kernel: The One Decision That Will Shape Your AI Agent Architecture was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.
