bytedance/deer-flow

Daily Info Board · 2026-02-25
Category: Open-Source Projects
Source: github_search
Score: 9
Published: 2026-02-25T01:42:41Z

AI Summary

ByteDance has open-sourced DeerFlow 2.0, a ground-up rewrite positioned as a "super agent harness" that integrates sub-agents, long-term memory, and sandboxed execution, helping developers build more reliable, production-ready multi-step automation workflows.
#GitHub #repo #OpenSourceProjects #DeerFlow #LangGraph #LangChain #MCP #Agent

Content Excerpt

🦌 DeerFlow - 2.0

DeerFlow (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is an open-source **super agent harness** that orchestrates **sub-agents**, **memory**, and **sandboxes** to do almost anything — powered by **extensible skills**.

https://github.com/user-attachments/assets/a8bcadc4-e040-4cf2-8fda-dd768b999c18
[!NOTE]
**DeerFlow 2.0 is a ground-up rewrite.** It shares no code with v1. If you're looking for the original Deep Research framework, it's maintained on the 1.x branch — contributions there are still welcome. Active development has moved to 2.0.
Official Website

Learn more and see **real demos** on our official website.

**deerflow.tech**

---
Table of Contents
Quick Start
Sandbox Mode
From Deep Research to Super Agent Harness
Core Features
Skills & Tools
Sub-Agents
Sandbox & File System
Context Engineering
Long-Term Memory
Recommended Models
Documentation
Contributing
License
Acknowledgments
Star History
Quick Start
Configuration
**Clone the DeerFlow repository**
**Generate local configuration files**

 From the project root directory (deer-flow/), run:

 

 This command creates local configuration files based on the provided example templates.
**Configure your preferred model(s)**

 Edit config.yaml and define at least one model:
**Set API keys for your configured model(s)**

 Choose one of the following methods:
Option A: Edit the .env file in the project root (Recommended)
Option B: Export environment variables in your shell
Option C: Edit config.yaml directly (Not recommended for production)
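For Option B, a minimal sketch of what exporting a key might look like. The variable name `OPENAI_API_KEY` is an assumption for an OpenAI-compatible provider; the exact variable depends on the model you configured, so check the generated .env for the names DeerFlow expects.

```shell
# Hypothetical example: export an API key for an OpenAI-compatible provider.
# The variable name is an assumption; use the names listed in your .env file.
export OPENAI_API_KEY="sk-your-key-here"
```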

 
Running the Application
Option 1: Docker (Recommended)

The fastest way to get started with a consistent environment:
**Initialize and start**:
 

 `make docker-start` now starts the provisioner only when config.yaml uses provisioner mode (`sandbox.use: src.community.aio_sandbox:AioSandboxProvider` with a `provisioner_url`).
**Access**: http://localhost:2026

See CONTRIBUTING.md for detailed Docker development guide.
Option 2: Local Development

If you prefer running services locally:
**Check prerequisites**:
**(Optional) Pre-pull sandbox image**:
**Start services**:
**Access**: http://localhost:2026
Advanced
Sandbox Mode

DeerFlow supports multiple sandbox execution modes:
**Local Execution** (runs sandbox code directly on the host machine)
**Docker Execution** (runs sandbox code in isolated Docker containers)
**Docker Execution with Kubernetes** (runs sandbox code in Kubernetes pods via provisioner service)

For Docker development, service startup follows the sandbox mode set in config.yaml; in Local and Docker modes, the provisioner is not started.

See the Sandbox Configuration Guide to configure your preferred mode.
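For the Kubernetes/provisioner mode, the Docker section above mentions `sandbox.use: src.community.aio_sandbox:AioSandboxProvider` with a `provisioner_url`. A hedged sketch of what such a config.yaml fragment might look like — the exact schema and the URL value are assumptions; consult the Sandbox Configuration Guide for the authoritative format:

```yaml
# Hypothetical config.yaml fragment (schema assumed; verify against the
# Sandbox Configuration Guide before use).
sandbox:
  use: src.community.aio_sandbox:AioSandboxProvider
  provisioner_url: http://localhost:8080  # assumed address of the provisioner service
```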
MCP Server

DeerFlow supports configurable MCP servers and skills to extend its capabilities.
See the MCP Server Guide for detailed instructions.
From Deep Research to Super Agent Harness

DeerFlow started as a Deep Research framework — and the community ran with it. Since launch, developers have pushed it far beyond research: building data pipelines, generating slide decks, spinning up dashboards, automating content workflows. Things we never anticipated.

That told us something important: DeerFlow wasn't just a research tool. It was a **harness** — a runtime that gives agents the infrastructure to actually get work done.

So we rebuilt it from scratch.

DeerFlow 2.0 is no longer a framework you wire together. It's a super agent harness — batteries included, fully extensible. Built on LangGraph and LangChain, it ships with everything an agent needs out of the box: a filesystem, memory, skills, sandboxed execution, and the ability to plan and spawn sub-agents for complex, multi-step tasks.

Use it as-is. Or tear it apart and make it yours.
Core Features
Skills & Tools

Skills are what make DeerFlow do *almost anything*.

A standard Agent Skill is a structured capability module — a Markdown file that defines a workflow, best practices, and references to supporting resources. DeerFlow ships with built-in skills for research, report generation, slide creation, web pages, image and video generation, and more. But the real power is extensibility: add your own skills, replace the built-in ones, or combine them into compound workflows.

Skills are loaded progressively — only when the task needs them, not all at once. This keeps the context window lean and makes DeerFlow work well even with token-sensitive models.

Tools follow the same philosophy. DeerFlow comes with a core toolset — web search, web fetch, file operations, bash execution — and supports custom tools via MCP servers and Python functions. Swap anything. Add anything.
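Since custom tools can be plain Python functions, a tool is ultimately just a typed, documented function the agent can call. A minimal sketch — the name, signature, and registration mechanism are assumptions, as this excerpt doesn't show DeerFlow's actual tool API:

```python
# Hypothetical custom tool written as a plain Python function. How it gets
# registered with DeerFlow is not shown in this excerpt; the point is that
# a tool is a typed, docstring-documented callable the agent can invoke.
def word_count(text: str) -> int:
    """Count whitespace-separated words in a string of text."""
    return len(text.split())
```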
Sub-Agents

Complex tasks rarely fit in a single pass. DeerFlow decomposes them.

The lead agent can spawn sub-agents on the fly — each with its own scoped context, tools, and termination conditions. Sub-agents run in parallel when possible, report back structured results, and the lead agent synthesizes everything into a coherent output.

This is how DeerFlow handles tasks that take minutes to hours: a research task might fan out into a dozen sub-agents, each exploring a different angle, then converge into a single report — or a website — or a slide deck with generated visuals. One harness, many hands.
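The fan-out/fan-in pattern described above can be sketched with stand-in sub-agents. This is an illustration of the pattern, not DeerFlow's API: `run_subagent` and `lead_agent` are hypothetical names, and a real sub-agent would run an LLM loop with its own scoped context, tools, and termination conditions.

```python
from concurrent.futures import ThreadPoolExecutor


def run_subagent(angle: str) -> dict:
    # Stub for a real sub-agent, which would run an LLM loop with its own
    # scoped context, tools, and termination conditions, then report back
    # a structured result.
    return {"angle": angle, "findings": f"notes on {angle}"}


def lead_agent(task: str, angles: list[str]) -> str:
    # Fan out: run one sub-agent per angle, in parallel where possible.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_subagent, angles))
    # Fan in: synthesize the structured results into one coherent output.
    return task + ": " + "; ".join(r["findings"] for r in results)
```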
Sandbox & File System

DeerFlow doesn't just *talk* about doing things. It has its own computer.

Each task runs inside an isolated Docker container with a full filesystem — skills, workspace, uploads, outputs. The agent reads, writes, and edits files. It executes bash commands and code. It views images. All sandboxed, all auditable, zero contamination between sessions.

This is the difference between a chatbot with tool access and an agent with an actual execution environment.
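The per-session layout described above (skills, workspace, uploads, outputs) can be sketched as a scratch directory tree. The directory names come from the text; the layout and function name are assumptions for illustration, and the real thing lives inside the Docker container rather than on the host:

```python
import pathlib
import tempfile


def make_session_fs() -> pathlib.Path:
    # Hypothetical sketch of a per-session sandbox filesystem. Each session
    # gets a fresh root, so nothing leaks between sessions.
    root = pathlib.Path(tempfile.mkdtemp(prefix="deerflow-session-"))
    for sub in ("skills", "workspace", "uploads", "outputs"):
        (root / sub).mkdir()
    return root
```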
Context Engineering

**Isolated Sub-Agent Context**: Each sub-agent runs in its own isolated context; it cannot see the context of the lead agent or of other sub-agents. This is important to ensure that the sub-agent is able to focus on the …