A Vue 3 renderer built for AI-powered streaming Markdown: incremental Monaco code blocks, progressive Mermaid diagrams, and fast KaTeX formulas, with real-time, jitter-free updates, ready to use out of the box.
npx skills add https://github.com/simon-he95/markstream-vue --skill markstream-custom-components

Install this skill via the CLI and start using the SKILL.md workflow in your workspace.
Fast, streaming-friendly Markdown rendering for Vue 3 — progressive Mermaid, streaming diff code blocks, and real-time previews optimized for large documents.
Looking for other frameworks?
- markstream-vue2 — a baseline port with fewer advanced features
- markstream-react — see packages/markstream-react/README.md (first-pass port)

📖 Detailed docs, API, and advanced usage: https://markstream-vue-docs.simonhe.me/guide/
| If you want to... | Start here | Then go to |
|---|---|---|
| get the first render on screen | Quick Start | Installation guide |
| integrate it into a docs site or VitePress theme | Docs Site & VitePress | Custom Tags & Advanced Components |
| build an AI chat UI or SSE stream | AI Chat & Streaming | Performance |
| replace one built-in renderer | Override Built-in Components | Renderer & Node Components |
| add trusted tags such as thinking | Custom Tags & Advanced Components | API Reference |
| debug a broken integration but do not know why yet | Troubleshooting by Symptom | Troubleshooting |
pnpm play:nuxt

If you want the packaged AI assets without cloning the repo:
npx skills add Simon-He95/markstream-vue
npx markstream-vue skills list
npx markstream-vue skills install
npx markstream-vue prompts list
npx markstream-vue prompts show install-markstream
Recommended usage:
- `npx skills add Simon-He95/markstream-vue` is the primary path for Codex-compatible skill discovery because it reads `.agents/skills` directly from the GitHub repository.
- `skills install` installs the bundled skills into your agent skill directory (default: `~/.agents/skills`).
- Use `prompts list` and `prompts show` to discover and copy maintained prompt templates.

Other `npx skills add` forms also work:
# Full GitHub URL
npx skills add https://github.com/Simon-He95/markstream-vue
# Direct path to one skill in this repo
npx skills add https://github.com/Simon-He95/markstream-vue/tree/main/.agents/skills/markstream-install
# Any git URL
npx skills add [email protected]:Simon-He95/markstream-vue.git
The test page gives you an editor + live preview plus “generate share link” that encodes the input in the URL (with a fallback to open directly or pre-fill a GitHub Issue for long payloads).
If markstream-vue helps your work, you can support ongoing maintenance with one of these QR codes.
| Alipay | WeChat Pay |
|---|---|
| ![]() | ![]() |
pnpm add markstream-vue
# npm install markstream-vue
# yarn add markstream-vue
// main.ts
import { createApp } from 'vue'
import MarkdownRender from 'markstream-vue'
import 'markstream-vue/index.css'

createApp({
  components: { MarkdownRender },
  template: '<MarkdownRender custom-id="docs" :content="doc" />',
  setup() {
    const doc = '# Hello from markstream-vue\n\nSupports **streaming** nodes.'
    return { doc }
  },
}).mount('#app')
Import markstream-vue/index.css after your reset (e.g., Tailwind @layer components) so renderer styles win over utility classes. Install optional peers such as stream-monaco, shiki, stream-markdown, mermaid, and katex only when you need Monaco code blocks, Shiki highlighting, diagrams, or math.
If your app intentionally scales root font size on mobile, use markstream-vue/index.px.css to avoid rem-based global scaling side effects.
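To make the ordering concrete, here is a minimal sketch of the entry-file imports; the reset path is illustrative:

```typescript
// main.ts — reset/utility CSS first, renderer CSS second so it wins the cascade
import './styles/reset.css' // your reset or Tailwind entry (illustrative path)
import 'markstream-vue/index.css'
// If your app scales the root font size on mobile, swap in the px build instead:
// import 'markstream-vue/index.px.css'
```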
Renderer CSS is scoped under an internal .markstream-vue container to minimize global style conflicts. If you render exported node components outside of MarkdownRender, wrap them in an element with class markstream-vue.
For dark theme variables, either add a .dark class on an ancestor, or pass :is-dark="true" to MarkdownRender to scope dark mode to the renderer.
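Both dark-mode options, sketched in the same comment-template style used elsewhere in this README:

```typescript
// template
// Option A: a `.dark` class on an ancestor scopes the dark theme variables
// <div class="dark">
//   <MarkdownRender :content="doc" />
// </div>
//
// Option B: scope dark mode to the renderer only
// <MarkdownRender :is-dark="true" :content="doc" />
```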
Prefer the unified code-block theme prop for new integrations. When you render through MarkdownRender, pass it via code-block-props:
<MarkdownRender
:is-dark="isDark"
:code-block-props="{ theme: { light: 'vitesse-light', dark: 'vitesse-dark' } }"
:content="doc"
/>
Language icons use the built-in material theme by default. Advanced integrations can inspect or switch icon themes with the exported helpers, or set an initial theme with app.use(VueRendererMarkdown, { iconTheme }):
import { getRegisteredThemes, setIconTheme } from 'markstream-vue'
console.log(getRegisteredThemes()) // ['material']
setIconTheme('material')
Use registerIconTheme() if you want to add your own icon pack.
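A hedged sketch of registering a custom icon pack — the mapping shape shown here (language id to icon asset) is an assumption, so verify the exact `registerIconTheme()` signature in the API reference:

```typescript
import { registerIconTheme, setIconTheme } from 'markstream-vue'

// Hypothetical mapping shape: language id -> icon asset URL.
// Check the API reference for the real signature before relying on this.
registerIconTheme('my-icons', {
  typescript: '/icons/ts.svg',
  python: '/icons/py.svg',
})
setIconTheme('my-icons')
```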
Enable heavy peers only when needed:
import { enableKatex, enableMermaid } from 'markstream-vue'
import 'markstream-vue/index.css'
import 'katex/dist/katex.min.css'
// after you install `mermaid` / `katex` peers
enableMermaid()
enableKatex()
If you load KaTeX via CDN and want KaTeX rendering in a Web Worker (no bundler / optional peer not installed), inject a CDN-backed worker:
import { createKaTeXWorkerFromCDN, setKaTeXWorker } from 'markstream-vue'
const { worker } = createKaTeXWorkerFromCDN({
mode: 'classic',
// UMD builds used by importScripts() inside the worker
katexUrl: 'https://cdn.jsdelivr.net/npm/[email protected]/dist/katex.min.js',
mhchemUrl: 'https://cdn.jsdelivr.net/npm/[email protected]/dist/contrib/mhchem.min.js',
})
if (worker)
setKaTeXWorker(worker)
If you load Mermaid via CDN and want off-main-thread parsing (used by progressive Mermaid rendering), inject a Mermaid parser worker:
import { createMermaidWorkerFromCDN, setMermaidWorker } from 'markstream-vue'
const { worker } = createMermaidWorkerFromCDN({
// Mermaid CDN builds are commonly ESM; module worker is recommended.
mode: 'module',
workerOptions: { type: 'module' },
mermaidUrl: 'https://cdn.jsdelivr.net/npm/mermaid@11/dist/mermaid.esm.min.mjs',
})
if (worker)
setMermaidWorker(worker)
// plugins/markstream-vue.client.ts
import { defineNuxtPlugin } from '#app'
import MarkdownRender from 'markstream-vue'
import 'markstream-vue/index.css'
export default defineNuxtPlugin((nuxtApp) => {
nuxtApp.vueApp.component('MarkdownRender', MarkdownRender)
})
Then use <MarkdownRender :content="md" /> in your pages.
- `pnpm dev` — playground dev server
- `pnpm play:nuxt` — Nuxt playground dev
- `pnpm build` — library + CSS build
- `pnpm build:analyze` — build with bundle visualizer reports (bundle-visualizer.html, bundle-visualizer-tailwind.html)
- `pnpm size:check` — run dist + npm package size budget checks (same guard used in CI)
- `pnpm test` — Vitest suite (`pnpm test:update` for snapshots)
- `pnpm typecheck` / `pnpm lint` — type and lint checks

Render streamed Markdown (SSE/WebSocket) with incremental updates:
import type { ParsedNode } from 'markstream-vue'
import MarkdownRender, { getMarkdown, parseMarkdownToStructure } from 'markstream-vue'
import { ref } from 'vue'
const nodes = ref<ParsedNode[]>([])
const buffer = ref('')
const md = getMarkdown()
function addChunk(chunk: string) {
buffer.value += chunk
nodes.value = parseMarkdownToStructure(buffer.value, md)
}
// e.g., inside your SSE/onmessage handler
eventSource.onmessage = event => addChunk(event.data)
// template
// <MarkdownRender
// :nodes="nodes"
// :max-live-nodes="0"
// :batch-rendering="{
// renderBatchSize: 16,
// renderBatchDelay: 8,
// }"
// />
Switch rendering style per surface:
- `:max-live-nodes="0"` for AI-like “typing” with lightweight placeholders.

Pre-parse Markdown on the server or in a worker and render typed nodes on the client:
// server or worker
import { getMarkdown, parseMarkdownToStructure } from 'markstream-vue'
const md = getMarkdown()
const nodes = parseMarkdownToStructure('# Hello\n\nThis is parsed once', md)
// send `nodes` JSON to the client
<!-- client -->
<MarkdownRender :nodes="nodesFromServer" />
This avoids client-side parsing and keeps SSR/hydration deterministic.
Hydrate the client from `initialNodes` (and the raw `initialMarkdown` if you also stream later chunks):

import type { ParsedNode } from 'markstream-vue'
import { getMarkdown, parseMarkdownToStructure } from 'markstream-vue'
import { ref } from 'vue'
const nodes = ref<ParsedNode[]>(initialNodes)
const buffer = ref(initialMarkdown)
const md = getMarkdown() // match server setup
function addChunk(chunk: string) {
buffer.value += chunk
nodes.value = parseMarkdownToStructure(buffer.value, md)
}
This avoids re-parsing SSR content while letting later SSE/WebSocket chunks continue the stream.
Tip: when you know the stream has ended (the message is complete), use
`parseMarkdownToStructure(buffer.value, md, { final: true })` or pass `:final="true"` to the component. This disables mid-state (loading) parsing so trailing delimiters (like `$$` or an unclosed code fence) won’t get stuck showing a perpetual loading state.
- Keep `max-live-nodes` at its default 320 to enable virtualization. Nodes render immediately and the renderer keeps a sliding window of elements mounted, so long docs remain responsive without showing skeleton placeholders.
- Set `:max-live-nodes="0"` when you want a true typewriter effect. This disables virtualization and turns on incremental batching governed by `batchRendering`, `initialRenderBatchSize`, `renderBatchSize`, `renderBatchDelay`, and `renderBatchBudgetMs`, so new content flows in small slices with lightweight placeholders.

Pick one mode per surface: virtualization for best scrollback and steady memory usage, or incremental batching for AI-style “typing” previews.
Tip: In chats, combine
`max-live-nodes="0"` with a small `renderBatchSize` (e.g., 16) and a tiny `renderBatchDelay` (e.g., 8 ms) to keep the “typing” feel smooth without jumping large chunks. Tune `renderBatchBudgetMs` down if you need to cap CPU per frame.
- `content` vs `nodes`: pass raw Markdown or pre-parsed nodes (from `parseMarkdownToStructure`).
- `max-live-nodes`: 320 (default virtualization) or 0 (incremental batches).
- `batchRendering`: fine-tune batches with `initialRenderBatchSize`, `renderBatchSize`, `renderBatchDelay`, `renderBatchBudgetMs`.
- `enableMermaid` / `enableKatex`: (re)enable heavy peers or custom loaders when needed (pairs with `disableMermaid` / `disableKatex`).
- `parse-options`: reuse parser hooks (e.g., `preTransformTokens`, `requireClosingStrong`) on the component.
- `final`: marks end-of-stream; disables mid-state loading parsing and forces unfinished constructs to settle.
- `custom-html-tags`: extend the streaming HTML allowlist for custom tags and emit them as custom nodes for `setCustomComponents` (e.g., `['thinking']`).
- `setCustomComponents(customId?, mapping)`: register inline Vue components for custom tags/markers (scoped by `custom-id` when provided).

Example: map Markdown placeholders to Vue components (scoped)
import { setCustomComponents } from 'markstream-vue'
setCustomComponents('docs', {
CALLOUT: () => import('./components/Callout.vue'),
})
// Markdown: [[CALLOUT:warning title="Heads up" body="Details here"]]
Use the same custom-id on the renderer:
<MarkdownRender
:content="doc"
custom-id="docs"
/>
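A sketch of wiring a trusted `thinking` tag through `custom-html-tags` and `setCustomComponents`; the `Thinking.vue` component and the `chat` id are illustrative:

```typescript
import { setCustomComponents } from 'markstream-vue'

// 'chat' scopes this mapping; Thinking.vue is a hypothetical component.
setCustomComponents('chat', {
  thinking: () => import('./components/Thinking.vue'),
})

// template
// <MarkdownRender
//   custom-id="chat"
//   :custom-html-tags="['thinking']"
//   :content="md"
// />
```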
Parse hooks example (match server + client):
<MarkdownRender
:content="doc"
:parse-options="{
requireClosingStrong: true,
preTransformTokens: (tokens) => tokens,
}"
/>
- Install the optional peers (`mermaid` / `katex`) and pass `:enable-mermaid="true"` / `:enable-katex="true"`, or call the loader setters. If you load them via CDN script tags, the library will also pick up `window.mermaid` / `window.katex`.
- If you don't bundle `katex` but still want off-main-thread rendering, create and inject a worker that loads KaTeX via CDN (UMD) using `createKaTeXWorkerFromCDN()` + `setKaTeXWorker()`.
- Import `markstream-vue/index.css` once; use Shiki (`MarkdownCodeBlockNode`) when Monaco is too heavy. Infrequent language icons are split into an async chunk and load on demand; call `preloadExtendedLanguageIcons()` during app idle if you want to avoid a first-hit icon fallback.
- Register components with `setCustomComponents` (global or scoped), then emit markers/placeholders in Markdown and map them to Vue components.

| Needs | Typical Markdown preview | markstream-vue |
|---|---|---|
| Streaming input | Re-renders whole tree, flashes | Incremental batches with virtual windowing |
| Large code blocks | Slow re-highlight | Monaco streaming updates + Shiki option |
| Diagrams | Blocks while parsing | Progressive Mermaid with graceful fallback |
| Custom UI | Limited slots | Inline Vue components & typed nodes |
| Long docs | Memory spikes | Configurable live-node cap for steady usage |
[email protected] for parsing fixes.

Build something with markstream-vue? Open a PR to add it here (include a link + 1 screenshot/GIF). Ideal fits: AI/chat UIs, streaming docs, diff/code-review panes, or Markdown-driven pages with embedded Vue components.
A short video introduces the key features and usage of markstream-vue:
Watch on Bilibili: Open in Bilibili
- Streaming code blocks via Monaco (`CodeBlockNode`) or lightweight Shiki highlighting (`MarkdownCodeBlockNode`)
- `stream-markdown-parser` now documents how to reuse the parser in workers/SSE streams and feed `<MarkdownRender :nodes>` directly, plus APIs for registering global plugins and custom math helpers.

Troubleshooting has moved into the docs:
https://markstream-vue-docs.simonhe.me/guide/troubleshooting
If you can't find a solution there, open a GitHub issue:
https://github.com/Simon-He95/markstream-vue/issues
Thanks to all the people who have contributed to this project!
This project uses and benefits from:
Thanks to the authors and contributors of these projects!