MCP Diagram Tools: draw.io and Mermaid for AI-Generated Architecture Diagrams
How to use draw.io MCP and mcp-mermaid to generate architecture diagrams, flowcharts, and sequence diagrams from natural language prompts — and embed them directly in your blog or documentation.
Technical writing without diagrams is like code without tests — it works, but nobody trusts it. The problem is that creating diagrams is slow. You open a drawing tool, drag shapes, align arrows, fight with layout engines, and twenty minutes later you have a flowchart that will be outdated by next sprint.
Two MCP (Model Context Protocol) servers change this: draw.io MCP and mcp-mermaid. Both let you describe a diagram in natural language and get a rendered result in seconds. This post covers what each tool does, how to set them up, when to use which, and a workflow for embedding diagrams directly into blog posts and documentation.
What Is MCP and Why Does It Matter for Diagrams?
MCP is an open protocol that lets AI assistants call external tools. Instead of the AI generating a text description of a diagram and hoping you can visualize it, MCP lets the AI directly invoke a diagram renderer and hand you the output — an SVG, a PNG, an editable .drawio file, or a live editor URL.
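Concretely, each diagram request is a JSON-RPC `tools/call` under the MCP spec. A hedged sketch of what a client might send to mcp-mermaid (the tool name is the server's documented `generate_mermaid_diagram`, but the argument key names here are illustrative assumptions, not a confirmed API):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "generate_mermaid_diagram",
    "arguments": {
      "mermaid": "graph LR\n  A[Client] --> B[Server]",
      "outputType": "svg"
    }
  }
}
```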
The two servers covered here take different approaches:
| | draw.io MCP | mcp-mermaid |
|---|---|---|
| Maintainer | JGraph (official draw.io team) | Community (hustcc) |
| Output | .drawio files, SVG, PNG, PDF, browser editor URLs | SVG, PNG, base64 images, mermaid.ink URLs |
| Diagram format | draw.io XML (mxGraph), CSV, or Mermaid input | Mermaid syntax only |
| Shape library | 10,000+ shapes (AWS, Azure, GCP, Cisco, K8s, BPMN) | Standard Mermaid shapes |
| Editability | Full draw.io editor — drag, resize, restyle | Edit the Mermaid source and re-render |
| Best for | Complex architecture diagrams, cloud infra, polished visuals | Quick flowcharts, sequences, ERDs, Gantt charts |
Setting Up draw.io MCP
The draw.io MCP server ships as an npm package (@drawio/mcp) and offers four integration modes. For Claude Code, the Skill + CLI approach is the most practical.
Option 1: Skill File (Recommended for Claude Code)
```shell
mkdir -p ~/.claude/skills/drawio
curl -sL https://raw.githubusercontent.com/jgraph/drawio-mcp/main/skill-cli/drawio/SKILL.md \
  -o ~/.claude/skills/drawio/SKILL.md
```
This installs a skill that Claude Code can invoke with /drawio. It generates native .drawio files and can export to PNG, SVG, or PDF if you have draw.io Desktop installed.
```shell
# Example usage in Claude Code
/drawio svg architecture diagram for a Kafka-based data pipeline
/drawio png class diagram for the models in src/
/drawio sequence diagram for OAuth2 authorization code flow
```
Option 2: MCP Server (For Claude Code or Claude Desktop)
This is what I use. One command adds it to your project:
```shell
# Add to current project
claude mcp add drawio -- npx -y @drawio/mcp

# Or add globally (available in all projects)
claude mcp add --scope user drawio -- npx -y @drawio/mcp
```
This writes to .claude.json (project) or ~/.claude.json (global). You can also edit the config directly:
```json
{
  "mcpServers": {
    "drawio": {
      "command": "npx",
      "args": ["-y", "@drawio/mcp"]
    }
  }
}
```
Restart Claude Code after adding — MCP servers are loaded at startup.
The MCP server exposes three tools:
- `open_drawio_xml` — opens a diagram from draw.io XML in the browser editor
- `open_drawio_csv` — converts tabular CSV data into a diagram (great for org charts)
- `open_drawio_mermaid` — converts Mermaid syntax and opens it in draw.io
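As a concrete example for `open_drawio_csv`: draw.io's CSV import format mixes `#` configuration directives with ordinary CSV rows. A minimal org-chart sketch, with the caveat that this mirrors draw.io's own Arrange → Insert → CSV dialect; whether the MCP tool expects exactly this form is an assumption:

```csv
# label: %name%
# connect: {"from": "manager", "to": "name", "invert": true}
# layout: auto
name,manager
CEO,
CTO,CEO
VP Engineering,CTO
```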
Option 3: Remote MCP (For Claude.ai — Zero Install)
Add https://mcp.draw.io/mcp as a remote MCP server in Claude.ai settings. This gives you inline diagram rendering directly in the chat window with an interactive viewer — zoom, pan, layers, and a button to open in the full draw.io editor.
draw.io Capabilities Worth Knowing
The draw.io MCP includes a shape search tool that indexes 10,000+ shapes across all draw.io libraries. When you need AWS icons, Kubernetes primitives, or Cisco network symbols, the AI can search for the exact shape and use its style string in the diagram XML. This is what makes draw.io diagrams look professional — they use the real vendor icons, not generic boxes.
Exported .drawio.svg and .drawio.png files embed the full diagram XML. You can share a PNG in a Slack thread, and anyone can open it in draw.io to get the editable source. This is a genuinely useful feature for team collaboration.
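A quick sanity check that an export kept its editable source: draw.io stores the diagram XML (an `mxfile` document) in the SVG root's `content` attribute. This sketch fabricates a minimal stand-in file so it runs anywhere; point the `grep` at a real export in practice.

```shell
# Stand-in for a real export -- editable draw.io SVGs embed the diagram
# XML (an <mxfile> document) in the root element's content attribute.
cat > /tmp/example.drawio.svg <<'EOF'
<svg xmlns="http://www.w3.org/2000/svg" content="&lt;mxfile&gt;...&lt;/mxfile&gt;"></svg>
EOF

# If this matches, the file still carries the editable source.
grep -q 'mxfile' /tmp/example.drawio.svg && echo "editable source embedded"
```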
Setting Up mcp-mermaid
mcp-mermaid is a lighter-weight option that focuses exclusively on Mermaid diagram rendering.
Installation
```shell
# Add to current project
claude mcp add mcp-mermaid -- npx -y mcp-mermaid

# Install Playwright's Chromium (required for local rendering)
npx playwright install chromium
```
Or edit the config directly:
```json
{
  "mcpServers": {
    "mcp-mermaid": {
      "command": "npx",
      "args": ["-y", "mcp-mermaid"]
    }
  }
}
```
Restart Claude Code after adding.
Heads up: The first run downloads Chromium (~200 MB) via Playwright. This is a one-time cost — the rendering engine uses headless Chromium to produce pixel-perfect output. If you see an “Executable doesn’t exist” error after installing, run npx playwright install chromium manually and restart your Claude Code session — the MCP server process caches the Playwright path at startup.
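Before restarting, you can check whether Chromium actually landed on disk. This is a sketch assuming Playwright's default browser cache locations (`~/.cache/ms-playwright` on Linux; macOS uses `~/Library/Caches/ms-playwright`, so adjust accordingly):

```shell
# Look for a chromium-* directory in Playwright's browser cache.
dir="${PLAYWRIGHT_BROWSERS_PATH:-$HOME/.cache/ms-playwright}"
if ls "$dir" 2>/dev/null | grep -q chromium; then
  echo "chromium present in $dir"
else
  echo "chromium missing - run: npx playwright install chromium"
fi
```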
The Single Tool: generate_mermaid_diagram
mcp-mermaid exposes one tool with several output modes:
| Output Type | What You Get |
|---|---|
| `base64` | PNG image inline in chat (default) |
| `svg` | Raw SVG markup |
| `file` | PNG saved to disk with a timestamped filename |
| `svg_url` | Public URL via mermaid.ink (no local rendering needed) |
| `png_url` | Public URL via mermaid.ink |
| `mermaid` | Echo back the source (useful for validation) |
You can also set a theme (default, dark, forest, neutral, base) and a background color.
Docker Alternative
If you prefer not to install Chromium locally:
```shell
docker run -p 3033:3033 susuperli/mcp-mermaid:latest --transport sse
```
Then configure the MCP client to connect via SSE at http://localhost:3033.
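The manual config entry for that would look roughly like this. The `type`/`url` field names follow Claude Code's remote-server convention as I understand it, so verify against your client's documentation:

```json
{
  "mcpServers": {
    "mcp-mermaid": {
      "type": "sse",
      "url": "http://localhost:3033"
    }
  }
}
```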
When to Use Which
Here is the decision framework I use:
```mermaid
graph TD
    START["Need a diagram"] --> Q1{"Need vendor icons?<br/>AWS, K8s, Cisco, etc."}
    Q1 -->|Yes| DRAWIO["Use draw.io MCP"]
    Q1 -->|No| Q2{"Need pixel-perfect<br/>layout control?"}
    Q2 -->|Yes| DRAWIO
    Q2 -->|No| Q3{"Embedding in<br/>markdown/docs?"}
    Q3 -->|Yes| MERMAID["Use mcp-mermaid<br/>or write Mermaid directly"]
    Q3 -->|No| Q4{"Quick sketch or<br/>polished deliverable?"}
    Q4 -->|Quick| MERMAID
    Q4 -->|Polished| DRAWIO
```
Use draw.io MCP when:
- You need cloud architecture diagrams with real AWS/GCP/Azure icons
- The diagram will be presented to stakeholders or included in design docs
- You want an editable artifact that non-technical people can modify in draw.io
- You need network topology, BPMN, or domain-specific notation
Use mcp-mermaid when:
- You are writing a blog post or README and want inline diagrams
- The diagram is a flowchart, sequence diagram, ERD, or Gantt chart
- You want the diagram source to live in version control as text
- Speed matters more than visual polish
Use both when you are exploring an architecture — start with Mermaid for quick iteration, then re-create the final version in draw.io for the design document.
Practical Workflow: Diagrams in Blog Posts
Here is the workflow I use for this blog, which runs on Astro with client-side Mermaid rendering.
For Mermaid Diagrams
The simplest path: write Mermaid syntax directly in your markdown. If your blog already renders Mermaid code blocks (Astro, Hugo, Docusaurus, and most modern SSGs support this), you just need the syntax.
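Concretely, the diagram source just sits in a fenced code block inside the post, with no image files involved:

````markdown
```mermaid
graph LR
    A["Request"] --> B["Cache"] --> C["Origin"]
```
````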
Use mcp-mermaid or Claude’s native Mermaid knowledge to generate the syntax:
“Generate a Mermaid sequence diagram showing how a CI/CD pipeline triggers a canary deployment, monitors error rates, and rolls back if the threshold is exceeded.”
Claude generates:
```mermaid
sequenceDiagram
    participant Dev as Developer
    participant CI as CI Pipeline
    participant K8s as Kubernetes
    participant Mon as Monitoring
    participant LB as Load Balancer
    Dev->>CI: Push to main
    CI->>CI: Build + Test
    CI->>K8s: Deploy canary (10% traffic)
    K8s->>LB: Route 10% to canary
    loop Every 30s for 5 min
        Mon->>K8s: Check error rate
        alt Error rate > 1%
            Mon->>K8s: Rollback canary
            K8s->>LB: Route 100% to stable
            Mon->>Dev: Alert — rollback triggered
        end
    end
    Mon->>K8s: Promote canary to stable
    K8s->>LB: Route 100% to new version
    Mon->>Dev: Deployment successful
```
Paste that into your markdown file. Done. The source is versioned, diffable, and editable by anyone on the team.
For draw.io Diagrams
When you need a polished architecture diagram:
- Use the draw.io MCP tool — it generates draw.io XML and opens the editor in your browser
- In the draw.io editor: File → Export As → SVG (check “Embed diagram” to keep it editable)
- Save the `.drawio.svg` to your blog’s `public/images/blog/` directory
- Reference it in markdown: ``
You can also save the raw .drawio XML file alongside the export — this gives you the editable source. The examples in this post have their .drawio files at /images/blog/2026/aws-data-platform.drawio and /images/blog/2026/realtime-data-platform.drawio. Open either URL in draw.io to edit them.
Comparison: The Same Diagram Both Ways
Here is a simple data pipeline rendered as Mermaid (text in markdown):
```mermaid
graph LR
    A["Data Source"] --> B["Kafka"]
    B --> C["Stream Processor"]
    C --> D["Redis Cache"]
    C --> E["PostgreSQL"]
    D --> F["API Layer"]
    E --> F
    F --> G["Dashboard"]
```
The draw.io version of the same diagram would use proper cloud infrastructure icons, custom colors, shadow effects, and precise layout — but it lives as an image file rather than inline text.
For blog posts, Mermaid wins on maintainability. For architecture review documents, draw.io wins on visual quality.
Effective Prompts for Diagram Generation
The quality of AI-generated diagrams depends on four elements in your prompt: structure (what nodes exist and how they connect), type (which diagram type), layout (direction, spacing, grouping), and context (what the reader needs to understand).
Three examples that follow this pattern:
Mermaid — architecture with grouped layers:
“Create a Mermaid flowchart (graph LR) for a data pipeline. Group related nodes into subgraphs: ‘Ingestion’ containing WebSocket and REST API, ‘Processing’ containing Kafka and Stream Processor, ‘Storage’ containing Redis and PostgreSQL, ‘Serving’ containing API and Dashboard. Draw connections between groups.”
Mermaid — sequence with lifecycle:
“Generate a Mermaid sequence diagram for OAuth2 login. Participants: Browser, API Gateway, Auth Provider, User DB. Show the redirect flow, token exchange, and session creation. Use activate/deactivate to show processing and a loop block for token refresh.”
draw.io — cloud architecture with icons:
“Use draw.io to create an AWS architecture diagram. Show a VPC with two AZs, each with public (ALB) and private (ECS Fargate) subnets. Include RDS Multi-AZ, ElastiCache Redis, and S3. Use official AWS shape library icons. Orthogonal edge routing. Export as SVG.”
Layout Control: A Practical Comparison
Layout is where most AI-generated diagrams fall short. Here is how to fix it with each tool.
Mermaid Layout Controls
Mermaid’s layout is mostly automatic, but you have these levers:
Direction: graph TD (top-down), graph LR (left-right), graph BT (bottom-top), graph RL (right-left)
Subgraphs for spatial grouping:
```mermaid
graph LR
    subgraph Ingestion
        direction TB
        WS["WebSocket"] --> KAFKA["Kafka"]
        REST["REST API"] --> KAFKA
    end
    subgraph Processing
        direction TB
        SP["Stream Processor"]
        AGG["Aggregator"]
    end
    subgraph Storage
        direction TB
        REDIS["Redis"]
        PG["PostgreSQL"]
    end
    KAFKA --> SP
    KAFKA --> AGG
    SP --> REDIS
    AGG --> PG
```
Invisible edges for alignment: When you need two nodes side by side that are not connected, add an invisible edge:

```
A ~~~ B
```
Node ordering matters: Mermaid lays out nodes in the order they first appear. If you want Node A above Node B in a top-down graph, define A first.
Prompt for fixing layout:
“The nodes are too bunched up. Restructure the diagram so that the main flow goes left-to-right with graph LR. Group the ingestion nodes in a subgraph on the left, processing in the middle, and storage on the right. Use direction TB inside each subgraph so nodes within a group stack vertically.”
draw.io Layout Controls
draw.io gives you pixel-level control — explicit x, y coordinates on every node, waypoints on edges, and built-in layout algorithms (tree, organic, orthogonal). Prompt with specific positions: “Place the API Gateway at (400, 100), three microservices at y=300 spaced 250px apart.” Use orthogonal edge routing for clean right-angle connections.
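To see what that pixel-level control looks like in the underlying format, here is a single node in draw.io's mxGraph XML. The `id` and style string are illustrative, but `mxGeometry` is where the explicit coordinates from a prompt like "place the API Gateway at (400, 100)" end up:

```xml
<!-- One draw.io vertex with explicit position and size -->
<mxCell id="gw1" value="API Gateway" style="rounded=1;whiteSpace=wrap;"
        vertex="1" parent="1">
  <mxGeometry x="400" y="100" width="160" height="60" as="geometry" />
</mxCell>
```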
Bottom line: Mermaid gives you 80% of layout quality with 20% of the effort. draw.io gives you 100% but requires precise prompting or manual adjustment in the editor.
Real-World Test: The Agentic Ops Diagram
Here is a diagram from my Agentic Ops post, restructured with subgraphs to separate the trigger, diagnosis, and action phases:
```mermaid
graph TD
    subgraph trigger [" "]
        SLA["Business SLA<br/><i>data freshness < 60s</i>"]
    end
    subgraph diagnosis ["Diagnosis"]
        direction LR
        AGENT["AI Agent<br/><i>traverse · diagnose · act</i>"]
        DEP["Dependency Tree<br/><i>YAML — human-crafted</i>"]
        META["Metadata at Each Node<br/><i>logs · metrics · healthchecks</i>"]
        KE["Known-Error Memory<br/><i>pattern → fix</i>"]
    end
    subgraph action ["Action"]
        direction LR
        FIX["Automated Fix + Verify"]
        REPORT["Report to Human"]
        LEARN["Update Known Errors"]
    end
    SLA -->|"violation detected"| AGENT
    AGENT -->|"walks"| DEP
    DEP -->|"at each node"| META
    META -->|"matches?"| KE
    META -->|"novel error"| AGENT
    KE -->|"known fix"| FIX
    AGENT -->|"LLM reasoning"| FIX
    FIX --> REPORT
    FIX -->|"new pattern"| LEARN
    LEARN -.->|"grows"| KE
```
What changed and why:
- Subgraphs separate the trigger, diagnosis, and action phases — the reader immediately sees the three-phase structure
- `direction LR` inside subgraphs lays out diagnosis nodes horizontally so they read like a timeline: Agent → Tree → Metadata → Known Errors
- Transparent trigger subgraph keeps the SLA visually separated without a visible box
- Color coding distinguishes phases: amber for the trigger, blue-grey for diagnosis, brown for action
The prompt that produces this:
“Reorganize this agentic ops diagram into three phases: Trigger (just the SLA node), Diagnosis (Agent, Dependency Tree, Metadata, Known-Error Memory laid out left-to-right), and Action (Fix, Report, Learn laid out left-to-right). Use subgraphs for each phase. Make the trigger subgraph invisible. Color the diagnosis subgraph blue-grey and the action subgraph warm brown. Keep all the same edges and labels.”
draw.io Version: The Prompt
For draw.io, you would prompt:
“Use draw.io to create an architecture diagram for an agentic ops system. Three horizontal lanes: Trigger (top, amber background), Diagnosis (middle, blue-grey background), Action (bottom, warm brown background). In the Trigger lane: a single node ‘Business SLA — data freshness < 60s’. In the Diagnosis lane, left to right: AI Agent, Dependency Tree, Metadata, Known-Error Memory. In the Action lane: Automated Fix, Report to Human, Update Known Errors. Edges: SLA → Agent, Agent → Dependency Tree → Metadata → Known-Error Memory. Known-Error Memory → Fix (known fix), Metadata → Agent (novel error), Agent → Fix (LLM reasoning). Fix → Report, Fix → Learn, Learn → Known-Error Memory (dashed, feedback loop). Use rounded rectangles, no shadows, orthogonal edges. Space nodes 200px apart.”
The draw.io version gives you:
- Pixel-perfect swimlane alignment
- Orthogonal edge routing (no crossing)
- Consistent node sizes and spacing
- The ability to further adjust in the draw.io editor
The tradeoff: it lives as an image file, not diffable text in your markdown.
Example: AWS Data Platform Architecture
This is where draw.io MCP shines — cloud architecture with vendor icons. I generated this same architecture using both tools to show the difference. The system is an AWS data platform with ECS microservices, Kinesis streaming, Glue ETL, Redshift, and Athena.
Mermaid Version
```mermaid
graph TB
    subgraph VPC ["AWS VPC (10.0.0.0/16)"]
        subgraph PublicSubnet ["Public Subnets (AZ-1 & AZ-2)"]
            ALB["Application<br/>Load Balancer"]
            APIGW["API Gateway"]
        end
        subgraph PrivateSubnet ["Private Subnets — ECS Cluster"]
            direction LR
            SVC_INGEST["ECS Fargate<br/>Ingestion Service<br/><i>3 tasks</i>"]
            SVC_API["ECS Fargate<br/>API Service<br/><i>5 tasks</i>"]
            SVC_TRANSFORM["ECS Fargate<br/>Transform Service<br/><i>4 tasks</i>"]
            SVC_SCHEDULER["ECS Fargate<br/>Scheduler Service<br/><i>2 tasks</i>"]
        end
        subgraph DataSubnet ["Data Subnets"]
            RDS["RDS PostgreSQL<br/>Multi-AZ<br/><i>metadata store</i>"]
            ELASTICACHE["ElastiCache<br/>Redis Cluster<br/><i>hot cache</i>"]
        end
    end
    subgraph Analytics ["Analytics & Warehouse"]
        S3RAW["S3 Bucket<br/>Raw Data Lake<br/><i>Parquet / JSON</i>"]
        S3CURATED["S3 Bucket<br/>Curated Zone<br/><i>partitioned Parquet</i>"]
        GLUE["AWS Glue<br/>ETL Jobs &<br/>Data Catalog"]
        REDSHIFT["Amazon Redshift<br/>Serverless<br/><i>analytics warehouse</i>"]
        ATHENA["Amazon Athena<br/><i>ad-hoc queries</i>"]
    end
    subgraph Streaming ["Event Streaming"]
        KINESIS["Kinesis Data<br/>Streams<br/><i>real-time events</i>"]
        FIREHOSE["Kinesis<br/>Firehose<br/><i>delivery to S3</i>"]
    end
    subgraph Observability ["Monitoring"]
        CW["CloudWatch<br/>Metrics & Alarms"]
        XRAY["X-Ray<br/>Distributed Tracing"]
    end
    APIGW --> ALB
    ALB --> SVC_API
    ALB --> SVC_INGEST
    SVC_INGEST --> KINESIS
    KINESIS --> FIREHOSE
    FIREHOSE --> S3RAW
    KINESIS --> SVC_TRANSFORM
    SVC_TRANSFORM --> ELASTICACHE
    SVC_TRANSFORM --> RDS
    SVC_SCHEDULER --> GLUE
    S3RAW --> GLUE
    GLUE --> S3CURATED
    S3CURATED --> REDSHIFT
    S3CURATED --> ATHENA
    GLUE --> REDSHIFT
    SVC_API --> ELASTICACHE
    SVC_API --> RDS
    SVC_API --> REDSHIFT
    SVC_API -.-> CW
    SVC_INGEST -.-> CW
    SVC_TRANSFORM -.-> XRAY
```
This Mermaid version captures the full architecture — VPC with three subnet tiers, four ECS microservices, Kinesis streaming into a Glue/Redshift/Athena analytics stack, and observability via CloudWatch and X-Ray. The color coding maps to AWS service categories: purple for analytics, green for storage, orange for compute.
But notice the limitations: every service is a colored rectangle. There are no AWS icons, no AZ separation, and the layout engine decides node placement. For a blog post explaining concepts, this works. For a design review or architecture document, it falls short.
draw.io Version: What the MCP Produces
I generated this diagram using open_drawio_xml with draw.io’s AWS4 architecture icon set. The raw .drawio file is at /images/blog/2026/aws-data-platform.drawio — open it in draw.io to see the full diagram with AWS icons, then export as SVG or PNG.
The prompt:
“Create an AWS data platform architecture diagram using the draw.io MCP. The VPC contains three subnet groups: public subnets with ALB and API Gateway, private subnets with four ECS Fargate services (Ingestion, API, Transform, Scheduler), and data subnets with RDS PostgreSQL Multi-AZ and ElastiCache Redis. Outside the VPC: Kinesis Data Streams and Firehose for event streaming, an analytics group with S3 raw data lake, S3 curated zone, AWS Glue ETL, Redshift Serverless, and Athena. CloudWatch and X-Ray for monitoring. Use the official AWS4 shape library — VPC group, security group containers, ECS task shapes, RDS PostgreSQL, ElastiCache Redis, Kinesis, Glue, Redshift, Athena, S3 bucket, CloudWatch, and X-Ray icons. Use orthogonal edge routing with color-coded edges: orange for request path, purple for streaming, green for data lake, magenta for database, pink dashed for observability.”
The draw.io version adds:
- Official AWS4 icons — ALB, ECS Task, RDS PostgreSQL, ElastiCache Redis, Kinesis, Firehose, Glue, Redshift, Athena, S3 Bucket, CloudWatch, X-Ray are all instantly recognizable
- Proper VPC/subnet grouping using the AWS group shapes with correct color conventions (green for public, blue for private, red for data subnets)
- Color-coded edges — orange for the request path through ALB, purple for event streaming via Kinesis, green for data lake flows to S3, magenta for database connections, dashed pink for observability
- Pixel-precise layout — services are spaced 200px apart with consistent sizing, making the diagram scannable
- Editable — the generated draw.io link opens in the full editor where non-technical stakeholders can rearrange and annotate
The same information, but the draw.io version is what you put in a design document or architecture review. The Mermaid version is what you put in a blog post or README.
Example: Real-Time Data Platform
Here is the same comparison for a real-time data platform — the kind of system with WebSocket ingestion, stream processing, and multiple storage tiers where low latency matters.
Mermaid Version
```mermaid
graph LR
    subgraph Ingestion ["Ingestion Layer"]
        direction TB
        WS["WebSocket<br/>Connector"]
        REST["REST API<br/>Poller"]
        PROTO["Protocol<br/>Adapter"]
    end
    subgraph Messaging ["Message Bus"]
        direction TB
        K1["Kafka Topic<br/>raw_events"]
        K2["Kafka Topic<br/>filtered_events"]
        K3["Kafka Topic<br/>derived_metrics"]
    end
    subgraph Processing ["Stream Processing"]
        direction TB
        FILT["Filter &<br/>Normalize"]
        ENRICH["Enrichment<br/>Service"]
        AGG["Aggregation<br/>Engine"]
        ANOMALY["Anomaly<br/>Detection"]
    end
    subgraph Storage ["Storage Layer"]
        direction TB
        REDIS["Redis + TimeSeries<br/>hot data, sub-second"]
        TSDB["TimeSeries DB<br/>warm data 30d"]
        PG["PostgreSQL<br/>metadata & config"]
        S3["Object Store<br/>cold archive"]
    end
    subgraph Serving ["Serving Layer"]
        direction TB
        API["REST API"]
        WSOUT["WebSocket<br/>Push"]
        DASH["Dashboard"]
    end
    WS --> K1
    REST --> K1
    PROTO --> K1
    K1 --> FILT
    FILT --> K2
    K2 --> ENRICH
    ENRICH --> K3
    K3 --> AGG
    K3 --> ANOMALY
    AGG --> REDIS
    AGG --> TSDB
    ANOMALY --> PG
    ENRICH --> S3
    REDIS --> API
    REDIS --> WSOUT
    TSDB --> API
    PG --> API
    API --> DASH
    WSOUT --> DASH
```
Five layers, left to right: data enters through WebSocket/REST/Protocol adapters, flows through Kafka topics, gets filtered, enriched, and aggregated by stream processors, lands in storage tiers based on access pattern (Redis for hot, TimeSeries DB for warm, PostgreSQL for metadata, object store for cold), and is served via REST API and WebSocket push to dashboards.
draw.io Prompt for the Same Architecture
“Create a real-time data platform architecture diagram using draw.io. Five vertical columns, left to right:
Ingestion (blue-grey container): WebSocket Connector, REST API Poller, Protocol Adapter — stacked vertically.
Message Bus (brown container): three Kafka topics stacked vertically — raw_events, filtered_events, derived_metrics. Use Apache Kafka shapes.
Stream Processing (green container): Filter & Normalize, Enrichment Service, Aggregation Engine, Anomaly Detection.
Storage (rose container): Redis TimeSeries for hot data (sub-second), TimeSeries DB for warm data (30d), PostgreSQL for metadata, Object Store for cold archive. Use cylinder shapes for databases.
Serving (amber container): REST API, WebSocket Push, Dashboard.
Data flows left to right: all ingestion nodes → raw_events. raw_events → Filter → filtered_events → Enrichment → derived_metrics → Aggregation and Anomaly. Aggregation → Redis and TSDB. Anomaly → PostgreSQL. Enrichment → Object Store. Redis → API and WebSocket Push. TSDB and PostgreSQL → API. API and WebSocket → Dashboard.
Use orthogonal edge routing. 250px column spacing. Rounded rectangles. No shadows.”
Layout Adjustment Prompts
After the first generation, these prompts refine both versions:
For Mermaid — fixing cramped subgraphs:
“The Processing and Storage subgraphs are too close together. Add an invisible edge between them with `Processing ~~~ Storage` to increase spacing. Also move Anomaly Detection to the same vertical level as Aggregation Engine by defining them on the same line.”
The Verdict
For a blog, Mermaid in markdown is the default — the source lives in your post, is diffable, and renders automatically. Use draw.io MCP when you need a polished architecture diagram with cloud icons — export as SVG, commit the file, reference it as an image.
Beyond draw.io and Mermaid
These two handle most cases, but other tools fill specific gaps. See Beyond Mermaid: PlantUML, D2, and Excalidraw for when each beats Mermaid, with practical examples and layout control techniques.
Making Diagrams Theme-Aware
Two rules handle dark/light mode for all diagram tools:
Mermaid and PlantUML (render at view time): do not hardcode colors in style directives. Configure themeVariables in your renderer to match your site’s design tokens. The renderer detects dark/light mode and applies the right palette automatically. Your diagram source stays clean — just structure and connections.
draw.io, D2, Excalidraw (render at author time): export dark and light SVG variants. Use CSS to show the right one:
```css
.drawio-dark { display: block; }
.drawio-light { display: none; }
.light .drawio-dark { display: none; }
.light .drawio-light { display: block; }
```
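The markup side of that CSS, where the class names must match and the file paths are illustrative:

```html
<img class="drawio-dark" src="/images/blog/arch-dark.drawio.svg" alt="Architecture diagram (dark)" />
<img class="drawio-light" src="/images/blog/arch-light.drawio.svg" alt="Architecture diagram (light)" />
```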
Do not use CSS filter: invert(1) on SVGs — it flips all colors including icons. Do not hardcode colors in Mermaid style directives — they will not adapt when the theme changes.
The Bigger Picture
MCP lets AI assistants reach out and use specialized tools — diagram renderers today, database clients and deployment pipelines tomorrow. For technical writing, the change is practical: diagrams become part of the writing flow instead of a separate tool-switching exercise. The friction drops low enough that you actually include the diagram instead of writing “TODO: add architecture diagram here” and never coming back to it.
Start with Mermaid in your markdown, reach for draw.io when you need vendor icons, and set up both MCP servers. The five minutes of setup pays for itself the first time you need a sequence diagram.