Your AI Chatbot Is Waiting to Be Asked.
Introducing the Autoplay Proactive SDK: the missing layer between what your users do and what your AI agents know.
With the standard session replay experience, you'd pull up a recording after a user churned and work backwards, trying to figure out what went wrong. It was useful, but it was always too late.
So we built a pipeline to go deeper, ingesting sessions at scale, extracting structured intent signals: what workflows users were attempting, which tasks they completed, where they stalled. We built TERRA, a framework to understand the relationship between user actions and the ideal “golden path” through a product, model completion rates per workflow, and understand intent at the session level.
Then the obvious question was: why are we only looking at this after the fact?
The Problem With Reactive AI
Most AI copilots wait to be asked. A user opens the chat, types a question, gets an answer. It’s a slightly smarter FAQ, but it misses the majority of users who never open the chat. They hit a wall and quietly leave.
Take a PLG onboarding flow (e.g. SMSPs). A new user signs up, with no guided tour. They need to connect their Instagram account and upload their media before they can schedule their first post. They reach the auth screen, get confused, and click back to the dashboard thinking they're done. They try to post. Nothing works. First session. Often their last.
Your chatbot has no idea any of this is happening. Your support team can fix it, but only after the user is gone.
What the Data Stack Gets Wrong
You already have the data. LogRocket shows the user left the auth screen after 8 seconds. Amplitude shows a drop-off at onboarding step 3. Segment has a connect_instagram_started event with no completed match.
You know they left, where, and when. But none of these tools know what the user was trying to do, and without intent, goal, and progress, you can’t trigger action. You can’t tell your chatbot to intervene. You can’t send the right email. You just watch the funnel.
The gap isn’t in data collection. It’s in the layer that turns raw activity into something an AI agent can reason over.
Introducing the Autoplay Proactive SDK
Today we're launching the Autoplay Proactive SDK: it streams what your users are doing, in real time, as clean, structured, LLM-ready context, directly into your agents.
Not raw telemetry. Structured payloads: the page the user is on, what they clicked, what they're trying to accomplish, where they've stalled. Autoplay sits between your product and your copilot. It doesn't replace Intercom or Zendesk; it gives them eyes.
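To make that concrete, here is a toy sketch of what such a structured payload could contain. The field names and the to_text() flattening below are our illustration, not the SDK's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a structured event payload; the real
# ActionsPayload schema may differ (see the Autoplay docs).
@dataclass
class ActionSketch:
    page: str           # where the user is
    element: str        # what they clicked
    inferred_goal: str  # what they're trying to accomplish

@dataclass
class PayloadSketch:
    session_id: str
    actions: list[ActionSketch] = field(default_factory=list)

    def to_text(self) -> str:
        # Flatten the actions into LLM-ready context lines
        return "\n".join(
            f"[{a.page}] clicked '{a.element}' (goal: {a.inferred_goal})"
            for a in self.actions
        )

p = PayloadSketch("sess-1", [ActionSketch("/auth/meta", "Back", "connect Instagram")])
print(p.to_text())  # [/auth/meta] clicked 'Back' (goal: connect Instagram)
```

The point is the shape, not the fields: a payload is something an LLM can read directly, without you writing a telemetry-to-prose translation layer first.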
Pull: smarter answers when users ask. User asks “how do I schedule a post?” Without Autoplay, the chatbot returns a generic setup article. With Autoplay, it knows they reached the Meta auth screen 4 minutes ago and left without completing it, and delivers the exact missing step.
Push: intervention before they ask at all. Autoplay detects the missed step the moment it happens and fires your copilot: “It looks like Instagram isn’t connected yet, here’s the one step you need.” User completes the flow. Schedules their first post. Stays.
The Three-Layer Model
The SDK is a progressive build-up. Each layer makes your copilot meaningfully smarter.
Layer 1: Real-time events (available now). Every user action is captured, processed (extraction → normalisation → optional LLM summarisation), and delivered as a typed payload. Your agent knows what the user is doing right now without waiting for them to describe it.
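As a rough illustration of the extraction → normalisation step, here is a toy normaliser over a hypothetical raw click event. The event fields and the normalised shape are our assumptions, not Autoplay's:

```python
from urllib.parse import urlparse

# Hypothetical raw DOM event, as a capture layer might emit it
raw_event = {
    "type": "click",
    "target": {"tag": "BUTTON", "text": "  Connect Instagram  "},
    "url": "https://app.example.com/auth/meta?ref=onboarding",
    "ts": 1717000000,
}

def normalise(event: dict) -> dict:
    # Strip volatile details (query strings, whitespace) down to a
    # stable action record that can be summarised or embedded
    return {
        "action": event["type"],
        "label": event["target"]["text"].strip(),
        "page": urlparse(event["url"]).path,
        "ts": event["ts"],
    }

print(normalise(raw_event))
# {'action': 'click', 'label': 'Connect Instagram', 'page': '/auth/meta', 'ts': 1717000000}
```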
Layer 2: User memory (coming soon). A per-user knowledge profile, updated after every session: what they've mastered, what's in progress, what they've never touched. Your agent stops suggesting workflows the user already knows and starts surfacing the actual gaps.
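A sketch of how such a profile might be maintained. The categories and update rule are illustrative, not the shipped Layer 2 design:

```python
# Hypothetical per-user memory profile, updated after each session.
# session_workflows maps workflow name → whether it was completed.
def update_profile(profile: dict, session_workflows: dict) -> dict:
    for workflow, completed in session_workflows.items():
        if completed:
            profile["mastered"].add(workflow)
            profile["in_progress"].discard(workflow)
        elif workflow not in profile["mastered"]:
            profile["in_progress"].add(workflow)
    return profile

profile = {"mastered": set(), "in_progress": set()}
update_profile(profile, {"connect_instagram": True, "schedule_post": False})
print(sorted(profile["mastered"]), sorted(profile["in_progress"]))
# ['connect_instagram'] ['schedule_post']
```

Anything in neither set is a workflow the user has never touched, which is exactly the gap a copilot should surface.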
Layer 3: Golden paths + knowledge base (coming soon). Record the ideal journey for any workflow using the Autoplay Chrome extension. Autoplay indexes these in a vector database. At inference time, your agent retrieves the ideal path, compares it to what this user has done, and surfaces the precise next step.
Where the user is now       ← real-time events
What they've already done   ← user memory
Where they should be going  ← golden path from knowledge base
            ↓
copilot surfaces the single most relevant next step

Getting Started
Integration takes minutes.
See our full docs here.
Step 1 — Add the snippet to your frontend
import posthog from 'posthog-js'

posthog.init('YOUR_AUTOPLAY_API_KEY', {
  api_host: 'https://us.i.posthog.com',
  person_profiles: 'identified_only',
  session_idle_timeout_seconds: 120,
  loaded: (posthog) => {
    posthog.identify(posthog.get_distinct_id(), {
      product_id: 'YOUR_AUTOPLAY_PRODUCT_ID',
    });
  },
})
After login, pass the user’s email to enable cross-session identity linking:
posthog.identify(user.id, {
  product_id: 'YOUR_AUTOPLAY_PRODUCT_ID',
  email: user.email,
})
Step 2 — Install the SDK
pip install autoplay-sdk

Requires Python 3.10+.
Step 3 — Receive your first event
import asyncio

from autoplay_sdk import AsyncConnectorClient

async def main():
    async with AsyncConnectorClient(url=STREAM_URL, token=API_TOKEN) as client:
        client.on_actions(lambda p: print(p.to_text()))
        client.on_summary(lambda p: print(p.summary))
        await client.run()

asyncio.run(main())

Step 4 — Wire it to your copilot
from autoplay_sdk import AsyncConnectorClient, ActionsPayload

async def on_actions(payload: ActionsPayload) -> None:
    suggestion = await your_llm(
        system="You are a proactive product copilot. Suggest one helpful next step. Be brief.",
        user=f"## What the user is doing\n{payload.to_text()}",
    )
    if suggestion:
        await push_to_ui(payload.session_id, suggestion)

client = AsyncConnectorClient(url=CONNECTOR_URL, token=API_KEY)
client.on_actions(on_actions)

payload.to_text() is embedding-ready — pass it into your LLM context, upsert it to a vector store, or use it to trigger any downstream action: email, Slack notification, support escalation with full context attached.
Delivery is available as an SSE stream or push webhook. Both emit the same typed ActionsPayload / SummaryPayload objects.
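For the webhook option, a minimal receiver might look like the sketch below. The JSON field names ("type", "summary", "actions") are assumptions; consult the docs for the actual webhook schema:

```python
import json

# Hypothetical push-webhook handler: parse the JSON body and
# dispatch on the payload type. Field names are illustrative.
def handle_webhook(body: bytes) -> str:
    payload = json.loads(body)
    if payload.get("type") == "summary":
        return f"summary: {payload['summary']}"
    return f"actions: {len(payload.get('actions', []))} events"

print(handle_webhook(b'{"type": "actions", "actions": [{}, {}]}'))
# actions: 2 events
```

In production this function would sit behind your web framework's route handler; the SSE stream needs no endpoint at all, which is why the SDK example above simply calls client.run().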
What You Can Build
Reduce first-session churn. Detect exactly where users stall and trigger a contextual nudge (in-app, chatbot, or email) before they leave.
Eliminate generic chatbot answers. Every time a user opens support, your agent already knows what they were doing in the product 30 seconds ago.
The reboarding play. User has been on your platform three months but has never opened Analytics. Autoplay detects the gap. Copilot fires: “Want to see your best-performing posts?” Analytics adoption. Upgrade trigger.
Proactive upgrade signals. User hits a limit and starts exploring pricing — Autoplay detects the sequence and triggers your sales motion before they bounce.
Live-context RAG. Embed ActionsPayload events into your vector store in real time for retrieval grounded in what users actually do — not just your docs.
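A toy version of that loop, with a trivial stand-in embed() in place of a real embedding model and an in-memory list in place of a vector store:

```python
import math

# Stand-in embedding: bag-of-letters counts. Purely illustrative;
# swap in a real embedding model and vector store in practice.
def embed(text: str) -> list[float]:
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

store: list[tuple[str, list[float]]] = []

def upsert(text: str) -> None:
    # In real usage: text = payload.to_text()
    store.append((text, embed(text)))

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(item[1], q), reverse=True)
    return [text for text, _ in ranked[:k]]

upsert("user stalled on Meta auth screen")
upsert("user scheduled first post")
print(retrieve("auth screen stall"))
```

The retrieval grounding is behavioural: the nearest neighbours are things users did, not paragraphs from your docs.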
Join the Private Beta
Early access teams get the full SDK, a dedicated connector URL and API token, direct Slack support from us, and early influence on the roadmap.
→ Sign up for our private beta access
→ Join our Slack workspace — drop a message in #just-integrated after you add the snippet and we’ll get your connector set up same day.
→ Read the docs
What We Believe
The future of AI in software isn't a chat widget that waits. It's an agent that knows: what this user is trying to do, where they are right now, what they've done before, and what comes next. Everything we've built, from the session analysis and intent modelling to the golden path framework and the pipeline from raw DOM events to LLM-ready context, was pointing here.
We think this changes the economics of onboarding, support, and activation. If you’re building AI-powered products and want your agents to actually know what’s happening, come build with us.

