Animated streaming markdown

December 2025

When the AI generates a meeting summary in Dara, the content streams in token by token. I wanted each block (heading, paragraph, list item) to fade in with a slight upward slide as it completes, and a scanner card to follow the bottom of the content using spring physics. When the stream finishes, the animated preview needs to swap seamlessly into an editable TipTap editor.

The streaming parser is streaming-markdown by Damian Tarnawski. It's a character-by-character state machine that calls add_token, end_token, and add_text on a renderer as it parses; the default renderer just creates DOM elements. The trick is wrapping that renderer with an animating one, but the upstream library needed two patches first.

The original end_token callback doesn't tell you which token ended. It just decrements an index. The animated renderer needs to know whether the ending token is a block-level element to decide if it should trigger a reveal, so I patched end_token to pass the token type through:

function end_token(p) {
    const ended_token = p.tokens[p.len]
    p.len -= 1
    p.token = p.tokens[p.len]
    p.renderer.end_token(p.renderer.data, ended_token)
}

The second patch is parser_close_all. The original library has parser_end which flushes pending text, but it doesn't close open blocks. When an AI stream ends, the last paragraph or list item is still "open" in the token stack and never gets its end_token call. Without closing it, the last block in the content would stay invisible. parser_close_all walks the stack and fires end_token for every remaining open block.
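A minimal sketch of that patch, reusing the field names from the patched end_token above (tokens, len, token, renderer are the library's internals; the exact shape is an assumption based on that snippet):

```javascript
// Sketch of parser_close_all: walk the token stack and fire the
// renderer's end_token for every block still open when the stream
// ends, so the final paragraph or list item gets its reveal.
function parser_close_all(p) {
    while (p.len > 0) {
        const ended_token = p.tokens[p.len]
        p.len -= 1
        p.token = p.tokens[p.len]
        p.renderer.end_token(p.renderer.data, ended_token)
    }
}
```

Open blocks close innermost-first, so a list item still open inside a paragraph ends before the paragraph does, mirroring the order end_token would have fired in normally.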

Animated renderer

With those changes, the createAnimatedRenderer function takes the default renderer and intercepts two callbacks. When a block-level token starts, the element gets opacity: 0 and translateY(8px). When it ends, a requestAnimationFrame flips both to their final values, triggering CSS transitions:

add_token: (data, token) => {
  base.add_token(data, token)

  if (BLOCK_TOKENS.has(token)) {
    const el = data.nodes[data.index]
    el.style.opacity = '0'
    el.style.transform = 'translateY(8px)'
    el.style.transition = `opacity ${ANIMATION_DURATION}ms ease-out,
      transform ${ANIMATION_DURATION}ms ease-out`
    pendingBlocks.set(data.index, el)
  }
},
end_token: (data, token) => {
  const el = pendingBlocks.get(data.index)
  base.end_token(data, token)

  if (el && BLOCK_TOKENS.has(token)) {
    pendingBlocks.delete(data.index + 1)
    requestAnimationFrame(() => {
      el.style.opacity = '1'
      el.style.transform = 'translateY(0)'
      onBlockRevealed(el.getBoundingClientRect().bottom - containerRect.top)
    })
  }
}

The onBlockRevealed callback reports the bottom position of each completed block. This drives the scanner card.

Scanner card

The scanner is a small floating element positioned absolutely below the last completed block. Its top value is animated with framer-motion's spring physics:

<motion.div
  className="absolute left-0 right-0"
  style={{ top: scannerTop }}
  animate={{ top: scannerTop }}
  transition={{ type: 'spring', damping: 25, stiffness: 300 }}
>
  <AnimatePresence>
    {scannerVisible && <ScannerCard />}
  </AnimatePresence>
</motion.div>

As blocks reveal and onBlockRevealed fires, scannerTop updates and the card springs downward to follow. The spring parameters give it a natural feel without overshooting too much.

A spacer div at the bottom, with its height set to scannerTop + SCANNER_HEIGHT, ensures the parent container always has enough room for the card to sit below the content, preventing layout jumps as the stream grows.

Streaming to editable

The hard part is the handoff. While streaming, the user sees the animated preview rendered by streaming-markdown. Once the stream completes, they need a fully editable TipTap editor with the same content. The swap has to be invisible.

Both renderers are mounted simultaneously. TipTap sits behind the streaming preview in an absolute inset-0 invisible container so it doesn't affect layout. A useReducer state machine manages the phases:

idle → streaming → completing → idle

When the stream ends, the state transitions to completing. During this phase, two things happen on staggered timeouts: TipTap gets hydrated with the final markdown via editor.commands.setContent, and the scanner card fades out. After the animation duration plus a small buffer, the state moves to idle, which removes the streaming preview and reveals TipTap underneath.

if (!isGenerating && wasGenerating) {
  dispatch({ type: 'GENERATION_ENDED' })

  setTimeout(() => dispatch({ type: 'HIDE_SCANNER' }),
    ANIMATION_DURATION + SCANNER_FADE_DELAY)

  setTimeout(() => dispatch({ type: 'ANIMATION_COMPLETE' }),
    ANIMATION_DURATION + EDITOR_SWAP_DELAY)
}
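The reducer itself isn't shown above; here is a minimal sketch, assuming the phase names from the diagram and the action types used in the dispatches. The GENERATION_STARTED action and the scannerVisible flag are my assumptions, not confirmed parts of the app:

```javascript
// Hypothetical reducer for the streaming phases. Phase names follow
// the idle → streaming → completing → idle diagram; action types
// follow the dispatches in the effect above.
const initialState = { phase: 'idle', scannerVisible: false }

function streamReducer(state, action) {
    switch (action.type) {
        case 'GENERATION_STARTED':
            return { phase: 'streaming', scannerVisible: true }
        case 'GENERATION_ENDED':
            // Keep the preview mounted while TipTap hydrates underneath.
            return { ...state, phase: 'completing' }
        case 'HIDE_SCANNER':
            return { ...state, scannerVisible: false }
        case 'ANIMATION_COMPLETE':
            // Unmount the preview, revealing the editable editor.
            return { phase: 'idle', scannerVisible: false }
        default:
            return state
    }
}
```

Because the reducer is pure, the staggered timeouts stay in the effect and the state transitions themselves remain trivially testable.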

The generationKey on StreamingPreview forces a full remount on each new generation, which gives the parser a clean slate without needing manual reset logic.

Ownership tracking

One subtlety: when you navigate to a meeting whose summary someone else is generating, you shouldn't see partial streaming content. An ownedStreamsRef set tracks which meetings the current user initiated, and stream content only updates the preview if the meeting is in the set. Everyone else just sees the final result appear via Zero sync once it's complete.
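As a sketch, assuming meetings are keyed by id, with a plain Set standing in for ownedStreamsRef.current (the helper names are mine, not the app's):

```javascript
// Hypothetical ownership check: only the client that started a
// generation should render partial stream content for that meeting.
const ownedStreams = new Set()

function startGeneration(meetingId) {
    ownedStreams.add(meetingId)
}

// Called per incoming chunk; non-owners fall through and wait for
// the final result to arrive via Zero sync.
function shouldRenderChunk(meetingId) {
    return ownedStreams.has(meetingId)
}

function finishGeneration(meetingId) {
    ownedStreams.delete(meetingId)
}
```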