PlanExeOrg/PlanExe

Daily information dashboard · 2026-03-01
Open-source project · Source: github_search · Score: 0 · Published: 2026-03-01T01:40:29Z


Content excerpt

PlanExe

<p align="center">
 <img src="docs/planexe-humanoid-factory.gif?raw=true" alt="PlanExe - Turn your idea into a comprehensive plan in minutes, not months." width="700">
</p>

<p align="center">
 <strong>Turn your idea into a comprehensive plan in minutes, not months.</strong>
</p>

<p align="center">
 <strong>PlanExe is the premier planning tool for AI agents.</strong>
</p>

<p align="center">
 <a href="https://home.planexe.org/"><strong>Create an account</strong></a> &nbsp;|&nbsp;
 <a href="https://app.mach-ai.com/planexe_early_access"><strong>Generate a free plan</strong></a> &nbsp;|&nbsp;
 <a href="https://docs.planexe.org/getting_started/"><strong>Getting started guide</strong></a>
</p>

---
Example plans generated with PlanExe:
- A business plan for a Minecraft-themed escape room.
- A business plan for a Faraday cage manufacturing company.
- A pilot project for a Human-as-a-Service.

See more examples here.
What is PlanExe?

PlanExe is an open-source tool and the premier planning tool for AI agents. It turns a single plain-English goal statement into a 40-page strategic plan in roughly 15 minutes using local or cloud models. It accelerates the outline stage, but it is not a silver bullet for polished plans.

Typical output contains:
- Executive summary
- Gantt chart
- Governance structure
- Role descriptions
- Stakeholder maps
- Risk registers
- SWOT analyses

PlanExe produces well-structured, domain-aware output: correct terminology, logical task sequencing, and coherent sections. For technical topics (engineering programs, regulated industries), it often gets the vocabulary and structure right. Think of it as a first-draft scaffold that gives you something concrete to critique and refine.

However, the output has consistent weaknesses that matter: budgets are assumed rather than derived, timeline estimates are not grounded in real resource constraints, risk mitigations tend toward generic advice, and legal/regulatory details are plausible-sounding but unverified. The output should be treated as a structured starting point, not a deliverable. How much work it saves depends heavily on the project. For brainstorming or a first outline, it can save hours. For a client-ready plan, expect significant rework on every number, timeline, and risk section.

---
Model Context Protocol (MCP)

PlanExe exposes an MCP server for AI agents at https://mcp.planexe.org/

This assumes you have an MCP-compatible client (Claude, Cursor, Codex, LM Studio, Windsurf, OpenClaw, Antigravity).

The tool workflow (tools-only, not the MCP tasks protocol):
1. example_plans (optional: preview what PlanExe output looks like)
2. example_prompts
3. model_profiles (optional: helps choose a model_profile)
4. Non-tool step: draft and approve the prompt
5. plan_create
6. plan_status (poll every 5 minutes until done)
7. plan_retry (optional, if the plan failed)
8. Download the result via plan_download or plan_file_info

Concurrency note: each plan_create call returns a new plan_id, and the server does not cap per-client concurrency, so clients should track their own parallel plans.
Option A: Remote MCP (fastest path)
Prerequisites:
- An account at https://home.planexe.org.
- Sufficient funds to create plans.
- A PlanExe API key (pex_...) from your account.

Use this endpoint directly in your MCP client: https://mcp.planexe.org/
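As a rough illustration, remote MCP servers are usually registered in a client-side JSON config. The sketch below writes such a file; the file name, the field names, and the idea of passing the pex_... key as a Bearer Authorization header are assumptions rather than details confirmed by this README, so follow your client's documentation and the PlanExe MCP setup guide for the real format.

```bash
# Hypothetical client-config sketch. Field names vary by MCP client, and
# sending the pex_... key as a Bearer token is an assumption.
cat > mcp.json <<'EOF'
{
  "mcpServers": {
    "planexe": {
      "url": "https://mcp.planexe.org/",
      "headers": { "Authorization": "Bearer pex_YOUR_KEY_HERE" }
    }
  }
}
EOF
```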
Option B: Remote MCP + local downloads via proxy (mcp_local)

If you want artifacts saved directly to your disk from your MCP client, run the local proxy:
Option C: Run MCP server locally with Docker
Prerequisites:
- Docker
- An OpenRouter account
- A PlanExe .env file with OPENROUTER_API_KEY

Start the full stack:
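A minimal sketch of that step, assuming .env.docker-example (mentioned in the quickstart below) is the template for the required .env and that plain docker compose up starts the services defined in docker-compose.yml; the exact services and any compose profile for the MCP server may differ, so defer to docs/docker.md:

```bash
# Sketch only: create the env file, add your OpenRouter key, start the stack.
cp .env.docker-example .env   # then edit .env and set OPENROUTER_API_KEY
docker compose up             # the first run builds the images
```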

Make sure that you can create plans in the web interface before proceeding to MCP.

Then connect your client to:
http://localhost:8001/mcp

For local docker defaults, auth is disabled in docker-compose.yml.
Local file downloads via proxy (mcp_local)

If you want artifacts saved directly to your disk from your MCP client, run the local proxy:
MCP docs
Setup overview: https://docs.planexe.org/mcp/mcp_setup/
Tool details and flow: https://docs.planexe.org/mcp/mcp_details/
MCP Inspector guide: https://docs.planexe.org/mcp/inspector/
Cursor setup: https://docs.planexe.org/mcp/cursor/
Codex setup: https://docs.planexe.org/mcp/codex/
PlanExe MCP interface: https://docs.planexe.org/mcp/planexe_mcp_interface/
MCP Registry publishing metadata (server.json): mcp_cloud/server.json
llms.txt: https://mcp.planexe.org/llms.txt
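For a quick look at what the hosted server advertises, the llms.txt listed above can be fetched directly (plain curl; nothing PlanExe-specific assumed):

```bash
# Print the hosted MCP server's llms.txt summary.
curl -s https://mcp.planexe.org/llms.txt
```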

---

<details>
<summary><strong> Run locally with Docker (Click to expand)</strong></summary>

<br>

**Prerequisite:** Docker with Docker Compose installed; you only need basic Docker knowledge. No local Python setup is required because everything runs in containers.
Quickstart: single-user UI + worker (frontend_single_user + worker_plan)
1. Clone the repo and enter it.
2. Provide an LLM provider. Copy .env.docker-example to .env and fill in OPENROUTER_API_KEY with your key from OpenRouter. The containers mount .env and llm_config/; pick a model profile there. For host-side Ollama, use the docker-ollama-llama3.1 entry and ensure Ollama is listening on http://host.docker.internal:11434.
3. Start the stack (the first run builds the images). The worker listens on http://localhost:8000 and the UI comes up on http://localhost:7860 after the worker healthcheck passes.
4. Open http://localhost:7860 in your browser. Optional: set PLANEXE_PASSWORD in .env to require a password. Enter your idea, click the generate button, and watch progress in the worker logs. Outputs are written to run/ on the host (mounted into both containers).
5. Stop with Ctrl+C (or docker compose down). Rebuild after code/dependency changes.

A command sketch for these steps follows below.
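A command-level sketch of the quickstart, assuming the GitHub path PlanExeOrg/PlanExe from the header and default docker compose behaviour; the log service name is taken from the section title and may differ from the actual compose service:

```bash
# 1. Clone the repo and enter it (GitHub path assumed from the project name).
git clone https://github.com/PlanExeOrg/PlanExe.git
cd PlanExe

# 2. Provide an LLM provider: copy the example env file, then edit .env
#    and set OPENROUTER_API_KEY (pick a model profile in llm_config/).
cp .env.docker-example .env

# 3. Start the stack; the first run builds the images.
docker compose up

# 4. From another terminal, watch plan progress (service name assumed).
docker compose logs -f worker_plan

# 5. Stop, and rebuild after code or dependency changes.
docker compose down
docker compose up --build
```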

For compose tips, alternate ports, or troubleshooting, see docs/docker.md or docker-compose.md.
Configuration

**Config A:** Run a model in the cloud using a paid provider. Follow the instructions in OpenRouter.

**Config B:** Run models locally on a high-end computer. Follow the instructions for either Ollama or LM Studio. When using host-side tools with Docker, point the model URL at the host (for example h…