<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Autoplay AI]]></title><description><![CDATA[Helping your users reach time to value immediately on your software]]></description><link>https://blog.autoplay.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!LPwB!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75c6e9bf-4156-4b22-bb52-f84bcb0b65f6_830x830.png</url><title>Autoplay AI</title><link>https://blog.autoplay.ai</link></image><generator>Substack</generator><lastBuildDate>Thu, 30 Apr 2026 03:16:54 GMT</lastBuildDate><atom:link href="https://blog.autoplay.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Marie GD]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[autoplay-ai@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[autoplay-ai@substack.com]]></itunes:email><itunes:name><![CDATA[Autoplay AI]]></itunes:name></itunes:owner><itunes:author><![CDATA[Autoplay AI]]></itunes:author><googleplay:owner><![CDATA[autoplay-ai@substack.com]]></googleplay:owner><googleplay:email><![CDATA[autoplay-ai@substack.com]]></googleplay:email><googleplay:author><![CDATA[Autoplay AI]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Making Intercom Chat Proactive with Real-Time Sessions]]></title><description><![CDATA[A practical guide to URL triggers, event-based messages, and real-time session context in Intercom.]]></description><link>https://blog.autoplay.ai/p/making-intercom-chat-proactive-with</link><guid 
isPermaLink="false">https://blog.autoplay.ai/p/making-intercom-chat-proactive-with</guid><dc:creator><![CDATA[Sam Nesbitt]]></dc:creator><pubDate>Fri, 17 Apr 2026 13:31:36 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/659593c9-fe60-4f27-99c1-c131c2b612c6_1500x1330.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Fin, Intercom&#8217;s AI agent, is mostly set up to be reactive by default. A user opens the chat, types something, and Fin answers from the sources you have connected to it. That works for a lot of support questions, but it leaves out what happened in the product before the user opened chat.</p><p>There are three useful ways to make Fin more proactive, each requiring more setup than the last.</p><p>Before getting into the levels, here is the difference it makes in practice:</p><p><strong>Without session context</strong></p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;df0395f3-3153-499b-a9da-ca5d1f1c4833&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext">User: I can&#8217;t change my plan.

Fin:  Here&#8217;s how to change your plan: [help article link]
</code></pre></div><p><strong>With session context</strong></p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;8c254e81-e6fc-4e67-9a55-ea41c79acb9b&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext">Internal note: User visited /billing, selected Pro plan, 
submitted checkout, hit a payment validation error.

User: I can&#8217;t change my plan.

Fin:  It looks like the payment step didn&#8217;t complete &#8212; the card validation failed.

      Try re-entering your card details or use a different payment method.</code></pre></div><h2><strong>What Fin depends on</strong></h2><p>Fin is not a static chatbot. Its behavior depends on the sources you give it and the instructions you configure.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vOEF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42d4a0b-2a75-4d9e-ab20-e233fb305f40_2048x1116.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vOEF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42d4a0b-2a75-4d9e-ab20-e233fb305f40_2048x1116.png 424w, https://substackcdn.com/image/fetch/$s_!vOEF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42d4a0b-2a75-4d9e-ab20-e233fb305f40_2048x1116.png 848w, https://substackcdn.com/image/fetch/$s_!vOEF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42d4a0b-2a75-4d9e-ab20-e233fb305f40_2048x1116.png 1272w, https://substackcdn.com/image/fetch/$s_!vOEF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42d4a0b-2a75-4d9e-ab20-e233fb305f40_2048x1116.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vOEF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42d4a0b-2a75-4d9e-ab20-e233fb305f40_2048x1116.png" width="1456" height="793" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f42d4a0b-2a75-4d9e-ab20-e233fb305f40_2048x1116.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:793,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!vOEF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42d4a0b-2a75-4d9e-ab20-e233fb305f40_2048x1116.png 424w, https://substackcdn.com/image/fetch/$s_!vOEF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42d4a0b-2a75-4d9e-ab20-e233fb305f40_2048x1116.png 848w, https://substackcdn.com/image/fetch/$s_!vOEF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42d4a0b-2a75-4d9e-ab20-e233fb305f40_2048x1116.png 1272w, https://substackcdn.com/image/fetch/$s_!vOEF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42d4a0b-2a75-4d9e-ab20-e233fb305f40_2048x1116.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>You can make it answer from help center articles, product docs, internal notes, and other structured inputs. You can also decide which sources matter most in a given flow, which documents should be treated as primary, and how Fin should respond when sources conflict.</p><p>For example, billing docs might be primary inside a checkout flow, while general onboarding docs are secondary.</p><p>The limitation is usually not the model. It is what Fin can see. If Fin can only see the user&#8217;s message and your docs, it can give a good generic answer. 
If the conversation also includes the user&#8217;s product activity, workflow, and recent actions, Fin has a better starting point.</p><h2><strong>The two common Intercom setups</strong></h2><p>Most teams start in one of two places.</p><p>Some configure Fin mostly inside Intercom: Workflows, routing, articles, canned replies, and outbound messages.</p><p>Others take a more developer-led approach and connect Intercom to internal APIs, user data, and custom logic.</p><p>Both are useful, but neither automatically gives Fin live product context.</p><h2><strong>Level 1: URL and page targeting in Workflows</strong></h2><p>The simplest way to make Fin proactive is using Intercom&#8217;s Workflows to trigger a message when a user visits a specific page. No custom events, no API calls, just URL rules and a workflow.</p><p>In the Intercom workflow builder, set your trigger to <strong>&#8220;Customer visits a page&#8221;</strong> and add a page rule:</p><ul><li><p><strong>URL contains /pricing</strong>: triggers for any URL with /pricing in the path</p></li><li><p><strong>URL starts with <a href="https://app.yourproduct.com/billing">https://app.yourproduct.com/billing</a></strong>: targets all billing subpages</p></li><li><p><strong>Exact match</strong>: triggers only on one specific URL, including query parameters</p></li></ul><p>You can also layer in <strong>time on page</strong> as a condition. For example: trigger only if the user has been on /pricing for more than 60 seconds. That is already better than firing the chatbot the second someone lands on a page.</p><p>At the end of the workflow path, add a <strong>&#8220;Let Fin answer&#8221;</strong> step. When the user replies to the workflow message, Fin takes over and handles the conversation, pulling from your help docs, product articles, and system prompt instructions.</p><p>This gives you basic location-aware and timing-aware support with no code. The limitation is that the workflow is still based on a page rule. 
Fin knows where the user is, but not much about what they did before or after that page view.</p><h2><strong>Level 2: Event-based triggers</strong></h2><p>If you are tracking custom events in Intercom, you can use them to trigger messages, a step closer to behavior-based proactivity.</p><p>Track events via the JavaScript API:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;javascript&quot;,&quot;nodeId&quot;:&quot;feeda856-fde7-4cf7-b416-c6dce66924d2&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-javascript">window.Intercom('trackEvent', 'selected-plan', {
  plan_name: 'Pro',
  billing_cycle: 'annual'
});</code></pre></div><p>Once that event is tracked, you can create an outbound message in Intercom that fires every time the event is recorded. In the message rules, set <strong>&#8220;When&#8221;</strong> to your event, then configure frequency to avoid over-messaging. For example, at most once per hour per user.</p><p>You can also filter by <strong>event metadata</strong>. For example, only trigger the message if billing_cycle is annual, or layer in a <strong>&#8220;Where&#8221;</strong> rule like Current URL contains /checkout.</p><p>To keep Fin in the loop, add <strong>&#8220;Let Fin respond&#8221;</strong> as a follow-up action on the outbound message. When the user replies, Fin picks up the thread.</p><p>This is better than URL targeting. You are reacting to things users actually did, not just where they are.</p><p>But there is an important limitation: event-based messages use events as triggers, not as context.</p><p>You can manually define an event like selected-plan and trigger a message when it happens. But that does not mean Fin understands the sequence of events that led up to the conversation.</p><p>The event can start the conversation, but it does not become part of Fin&#8217;s reasoning unless you connect that event data back into the conversation as context.</p><p>That is the difference between event-based triggers and real-time session context.</p><h2><strong>Level 3: Real-time session events</strong></h2><p>The goal is not just to trigger a chatbot response from an event. The goal is to make the user&#8217;s recent product activity available as context inside the Intercom conversation.</p><p>That means connecting session events as a data source for the conversation, not just using them as one-off triggers.</p><p>You do not need every raw event. 
You need the actions that help Fin or the support team understand what the user was trying to do, what happened before they opened chat, and where the workflow broke down.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dVY3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69029408-0822-4cde-bd1a-000c80a05e96_1444x1418.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dVY3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69029408-0822-4cde-bd1a-000c80a05e96_1444x1418.png 424w, https://substackcdn.com/image/fetch/$s_!dVY3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69029408-0822-4cde-bd1a-000c80a05e96_1444x1418.png 848w, https://substackcdn.com/image/fetch/$s_!dVY3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69029408-0822-4cde-bd1a-000c80a05e96_1444x1418.png 1272w, https://substackcdn.com/image/fetch/$s_!dVY3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69029408-0822-4cde-bd1a-000c80a05e96_1444x1418.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dVY3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69029408-0822-4cde-bd1a-000c80a05e96_1444x1418.png" width="1444" height="1418" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/69029408-0822-4cde-bd1a-000c80a05e96_1444x1418.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1418,&quot;width&quot;:1444,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dVY3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69029408-0822-4cde-bd1a-000c80a05e96_1444x1418.png 424w, https://substackcdn.com/image/fetch/$s_!dVY3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69029408-0822-4cde-bd1a-000c80a05e96_1444x1418.png 848w, https://substackcdn.com/image/fetch/$s_!dVY3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69029408-0822-4cde-bd1a-000c80a05e96_1444x1418.png 1272w, https://substackcdn.com/image/fetch/$s_!dVY3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69029408-0822-4cde-bd1a-000c80a05e96_1444x1418.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The pattern has four parts:</p><ol><li><p>capture product events</p></li><li><p>link the product session to the right Intercom conversation</p></li><li><p>write useful activity into the conversation as internal notes</p></li><li><p>summarize and clean up older notes so the thread stays readable</p></li></ol><p>This is where a tool like Autoplay can help. Autoplay provides an <a href="https://developers.autoplay.ai/sdk/overview">SDK</a> for sending real-time product activity into chat and support tools. For Intercom, it handles the session linking, buffering, internal note writing, redaction, and summarization.</p><p>The pipeline looks like this:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;dd6a60f5-fbdd-4f4f-bdd4-0057602178c3&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext">Product events (PostHog)

&#8594; Autoplay links the session to an Intercom conversation

&#8594; Autoplay writes internal notes into the conversation thread

&#8594; Those notes become context inside the Intercom conversation</code></pre></div><p>Internal notes are the bridge. They let you put product activity into the Intercom conversation without showing it to the user. The support team can read them, and Fin can use that context depending on how your Intercom workspace is configured.</p><h3><strong>1. Capture product events</strong></h3><p>You need a stream of product activity: page views, clicks, form submissions, errors, and workflow events.</p><p>Autoplay uses PostHog-compatible capture for this. You add the snippet to your frontend, identify the user after login, and pass the fields Autoplay needs to connect that product session back to Intercom.</p><h3><strong>2. Link the product session to the Intercom conversation</strong></h3><p>Product events only help if they end up in the right thread.</p><p>Autoplay uses Intercom webhooks to know when a user starts or replies to a conversation. The key webhook topics are:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;d64c4416-a03f-4399-a6bb-285e9bcd8127&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext">conversation.user.created

conversation.user.replied</code></pre></div><p>Once Autoplay links a product session to an Intercom conversation, that mapping is kept stable so later events continue going into the same thread.</p><h3><strong>3. Write internal notes into Intercom</strong></h3><p>Once the session is linked, Autoplay writes product activity into the conversation as internal notes.</p><p>Those notes are visible to the support team, but not to the user. They give Intercom a live picture of what the user is doing in the product, for example:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;a6720b6d-af36-4a6f-b3f2-b845b10e6208&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext">[1] User clicked Sign up button on the pricing page

[2] User clicked Confirm plan button on the checkout page

[3] User submitted payment form on the checkout page</code></pre></div><p>In practice, you should only write the product actions Fin or the support team need, not raw sensitive input values.</p><h3><strong>4. Buffer events before the conversation is linked</strong></h3><p>Sometimes product events arrive before Autoplay knows which Intercom conversation they belong to.</p><p>In that case, Autoplay buffers the actions. Once the session is linked to a conversation, the buffered activity is flushed into the right thread.</p><h3><strong>5. Summarize and clean up older notes</strong></h3><p>Raw action notes are useful, but you do not want the Intercom thread to become a wall of events.</p><p>Autoplay can summarize the session after a configurable threshold of actions and redact older raw notes. The summary is written back into Intercom as a cleaner internal note:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;6cf27e5d-22c8-48f3-b851-3dd20a96a3fc&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext">The user navigated to the pricing page, selected the Pro plan,

submitted the checkout form, and hit a payment validation error.

They later returned to billing settings.</code></pre></div><p>The summarization step is separate from Fin. Fin and the support team then have that summary available inside the Intercom conversation, depending on how your workspace is configured.</p><p>For the full implementation, including the snippet, webhook setup, session mapping, note writing, redaction, and summarization, see the<a href="https://developers.autoplay.ai/integrations/intercom"> Autoplay Intercom integration guide</a>.</p><h2><strong>What this changes</strong></h2><p>With session notes in the thread, Fin and the support team have context before the user explains the problem.</p><p>If someone opens chat and asks &#8220;why can&#8217;t I change my plan,&#8221; the conversation already has a note showing they visited billing, selected Pro, submitted checkout, and hit a payment validation error.</p><p>Instead of asking the user to retrace their steps, the team can read the session summary and pick up from there.</p><h2><strong>Picking the right level</strong></h2><p>Most teams do not start with real-time session context, and they do not need to. 
URL and event triggers cover a lot of ground and are worth setting up properly before adding anything more complex.</p><p>But if your support team regularly spends time asking users to retrace their steps, or if Fin is giving technically correct answers that miss the actual workflow the user is stuck in, that is a signal that session context would help.</p><p>The Intercom docs for <a href="https://www.intercom.com/help/en/articles/12819568-configure-page-url-targeting-rules-for-workflows">URL targeting</a>, <a href="https://www.intercom.com/help/en/articles/5180516-send-repeatable-messages-based-on-events-you-track-in-intercom">event-based messages</a>, and <a href="https://www.intercom.com/help/en/articles/8194403-let-fin-ai-agent-follow-up-after-proactive-support-messages-or-workflows">letting Fin follow up after Workflows</a> are good starting points for the first two levels.</p><p>If you want to add the real-time layer, the <a href="https://developers.autoplay.ai/recipes/intercom">Autoplay Intercom integration guide</a> walks through the implementation: webhooks, session mapping, note writing, and summarization.</p>]]></content:encoded></item><item><title><![CDATA[Why we started with PostHog at Autoplay]]></title><description><![CDATA[Because we needed real-time behavioral infrastructure, not another analytics dashboard.]]></description><link>https://blog.autoplay.ai/p/why-we-started-with-posthog-at-autoplay</link><guid isPermaLink="false">https://blog.autoplay.ai/p/why-we-started-with-posthog-at-autoplay</guid><dc:creator><![CDATA[Sam Nesbitt]]></dc:creator><pubDate>Tue, 14 Apr 2026 13:19:28 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/84e3cc95-d62e-4cbe-95ee-eea60c21df5b_924x512.webp" length="0" type="image/webp"/><content:encoded><![CDATA[<p>We just launched the <a href="https://developers.autoplay.ai/quickstart">Autoplay SDK</a>.</p><p>That makes this a useful moment to explain a design choice that sits underneath it: why our real-time architecture started with PostHog.</p><p>Autoplay is not a reporting layer. It is infrastructure for turning live product behavior into model-readable context. The SDK is the public interface to that system. You connect your app, stream real-time user activity into your service, and receive typed payloads that can be embedded, stored, retrieved, and passed into agents or copilots. In our docs, that flow is explicit: browser activity enters the Autoplay connector pipeline, gets extracted, normalized, and optionally summarized, then arrives over SSE or push webhook as typed payloads your application can use directly.</p><p>That architecture needs a specific kind of event layer.</p><p>It needs broad browser-side capture with low frontend burden. It needs identity and session continuity. It needs an event model that is simple enough to ingest at scale but expressive enough to carry useful context.
It needs a clean way to route that stream into your own infrastructure in real time. It needs enough structure that downstream systems can reason over it programmatically. And it needs pricing that makes dense behavioral capture practical, not something you immediately start trimming.</p><p>Those are the reasons we started with PostHog.</p><p>That phrasing matters. We do plan to support other analytics layers over time. But when we were building the first version of this architecture, PostHog was the cleanest fit for the problem in front of us. It gave us the best starting point for building a real-time behavioral backbone that could power agents, copilots, memory systems, and retrieval pipelines, not just dashboards.</p><h2><strong>The architecture we actually run</strong></h2><p>In our standard setup, a customer installs posthog-js in the frontend, initializes it with their Autoplay API key, attaches a product_id, and identifies the user again after login with their application user ID and optionally email. That gives us the behavioral stream, the routing key, and the identity bridge we need for session and conversation linking. Our quickstart also uses a two-minute session idle timeout so the stream stays aligned with how we package live context downstream.</p><p>From there, we configure a PostHog webhook destination pointed at the customer&#8217;s Autoplay connector. That is the important boundary in the system. Before that point, you have events. After that point, you have context.</p><p>Autoplay takes the incoming stream and turns it into something agent systems can actually use. We extract raw browser activity into normalized actions, label those actions in natural language, group them into sessions, and optionally summarize the session into compact prose. The SDK then exposes that as typed ActionsPayload and SummaryPayload objects over SSE or push webhook. 
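</p><p>As a sketch of what consuming that stream can look like, here is a minimal parser for one SSE frame. The payload field names (type, session_id, actions) are illustrative assumptions, not the actual autoplay-sdk schema:</p>

```javascript
// Minimal sketch: turn one server-sent-events frame into a payload object.
// Field names below are assumptions for illustration, not the SDK schema.
function parseSsePayload(frame) {
  // An SSE frame may spread its JSON across several "data:" lines;
  // rejoin them (newline is insignificant whitespace in JSON) and parse.
  const data = frame
    .split('\n')
    .filter((line) => line.startsWith('data:'))
    .map((line) => line.slice('data:'.length).trim())
    .join('\n');
  const payload = JSON.parse(data);
  if (payload.type !== 'actions' && payload.type !== 'summary') {
    throw new Error('Unexpected payload type: ' + payload.type);
  }
  return payload;
}

// Example frame, shaped the way an actions payload might look:
const examplePayload = parseSsePayload([
  'event: actions',
  'data: {"type":"actions","session_id":"s_123",',
  'data: "actions":[{"label":"Submitted payment form"}]}'
].join('\n'));
// examplePayload.actions is now ready to embed, store, or pass to a model.
```

<p>In production the frames would come from an EventSource connection or a webhook body rather than a hardcoded string, and the real typed shapes come from the SDK.</p><p>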
Those payloads are designed to flow straight into vector stores, memory systems, copilots, and RAG pipelines.</p><p>The path looks like this:</p><ol><li><p>posthog-js captures browser activity.</p></li><li><p>PostHog ingests the events and attaches user and session context.</p></li><li><p>PostHog forwards selected events to the Autoplay connector.</p></li><li><p>Autoplay extracts, normalizes, labels, groups, and optionally summarizes the session.</p></li><li><p>autoplay-sdk consumes the resulting stream over SSE or push webhook.</p></li><li><p>Your service embeds that context, writes it to memory or retrieval infrastructure, and passes it into a model.</p></li><li><p>The model decides whether to stay silent, answer reactively, or trigger proactively.<br><br></p></li></ol><p>That is the architecture we use ourselves. But the more useful point is that it is also a pattern other teams can adopt. If you want support systems that know what the user was doing before they opened chat, copilots that react to real workflow state, or memory systems that update from product behavior instead of only from text conversations, this is the kind of event backbone you need.</p><h2><strong>Why the event model mattered</strong></h2><p>The core unit in PostHog is an event: a user or distinct_id, a timestamp, an event name, and a JSON-like property blob. Those properties can include text, booleans, numbers, dates and timestamps, arrays, and nested objects. That sounds basic, but it is exactly what made the model useful outside dashboards. It is flexible enough to carry product-specific context, and typed enough to stay understandable as the system grows.</p><p>That flexibility also matters operationally. Teams can define event schemas, manage event definitions, and group properties so structure stays consistent and discoverable over time rather than drifting into a mess of ad hoc payloads. 
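</p><p>To make that concrete, here is the event unit sketched as a plain object. The helper and its property names are hypothetical, for illustration only:</p>

```javascript
// Sketch of the event model described above: a distinct_id, a timestamp,
// an event name, and a JSON-like property blob. The helper is hypothetical,
// not a library API.
function buildEvent(distinctId, name, properties) {
  return {
    distinct_id: distinctId,
    timestamp: new Date().toISOString(),
    event: name,
    properties, // text, numbers, booleans, arrays, nested objects
  };
}

const planEvent = buildEvent('user_42', 'selected-plan', {
  plan_name: 'Pro',
  billing_cycle: 'annual',
  // the kind of routing and workflow context downstream systems need
  product_id: 'prod_demo',
  workflow: { name: 'checkout', step: 2 },
});
```

<p>The flat shape is the point: it is trivial to ingest at scale, yet the property blob can carry whatever product-specific context downstream systems need.</p><p>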
Events can also be enriched with user properties, group properties, and custom metadata, which matters because downstream analysis is rarely based on a naked click stream. Useful systems usually need organization-level context, user state, workflow identifiers, route keys, and product metadata attached to the underlying behavior.</p><p>That is what made PostHog a strong starting point for us.</p><p>If you are building agent systems, the raw requirement is not &#8220;have events.&#8221; It is &#8220;have events that can be turned into state.&#8221; You need enough flexibility to attach product IDs, route keys, workflow hints, or metadata without fighting the schema. You need enough structure that your application can reason over the stream programmatically. And you need enough consistency that models are not being fed an ever-changing mess of telemetry. PostHog gave us a strong base layer for that, and then Autoplay does the second half of the work by turning that event stream into typed actions and readable session summaries.</p><h2><strong>Why capture quality mattered</strong></h2><p>If you are trying to build on top of behavior, instrumentation burden matters as much as event quality.</p><p>PostHog&#8217;s web SDK automatically captures pageviews, pageleaves, clicks, input changes, and form submissions for common interactive elements by default. That meant teams could get broad behavioral coverage quickly rather than spending weeks manually instrumenting every workflow before they could test anything useful. For our use case, that mattered because the captured stream is not the final product. It is the substrate we transform into agent context. The faster you can get a high-coverage stream, the faster you can start turning real behavior into something your models can use.</p><p>This is one of the biggest reasons it made sense as our starting point. 
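</p><p>Getting that coverage is mostly initialization. Here is a sketch of the wiring from our standard setup, written against a passed-in client so it can run with a stub; the api_host value is a placeholder, and the product_id super property follows the setup described earlier rather than a verified schema:</p>

```javascript
// Sketch of the frontend wiring: init with an API key, attach a product_id,
// identify the user after login. The client is a parameter so this can be
// exercised with a stub; in the browser you would pass posthog-js itself.
function setupCapture(client, { apiKey, productId }) {
  // Placeholder host for illustration.
  client.init(apiKey, { api_host: 'https://ingest.example.com' });
  // Register product_id as a super property so every event carries it.
  client.register({ product_id: productId });
  return {
    // Call after login to bridge the anonymous session to the real user.
    identifyUser(userId, email) {
      client.identify(userId, email ? { email } : {});
    },
  };
}

// Stub standing in for posthog-js, recording the calls it receives:
const calls = [];
const stub = {
  init: (key, opts) => calls.push(['init', key, opts.api_host]),
  register: (props) => calls.push(['register', props.product_id]),
  identify: (id, props) => calls.push(['identify', id, props.email]),
};
setupCapture(stub, { apiKey: 'phc_demo', productId: 'prod_demo' })
  .identifyUser('user_42', 'sam@example.com');
```

<p>Autocapture then handles pageviews, clicks, and form submissions without per-workflow instrumentation.</p><p>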
A product team, support engineering team, or platform team can start with the same behavioral stream they already want for analytics, then use that same stream to power runtime systems. That means fewer duplicate SDKs, fewer competing schemas, and less drift between what the dashboard says happened and what the copilot thinks happened.</p><h2><strong>Why routing mattered</strong></h2><p>A lot of analytics products are good at collecting events and surfacing them in charts. That is not the same as being comfortable in the middle of a live product system.</p><p>For our use case, the event layer had to be adjacent to runtime. It had to let events move outward in real time into our own pipeline, not stop inside a reporting product. PostHog&#8217;s platform explicitly distinguishes between public capture endpoints and private query endpoints, and supports real-time destinations and webhooks in its broader product surface. It also supports transformation and enrichment patterns at ingestion time, which helps keep downstream data clean and structured before it reaches the systems that actually consume it.</p><p>Its public POST-only endpoints, including event capture at /i/v0/e, are not rate-limited, while private authenticated endpoints are rate-limited separately. For many analytics endpoints, the documented limits are about 240 requests per minute and 1,200 per hour. The query endpoint allows up to 2,400 per hour. Other private read and write APIs are capped around 480 per minute and 4,800 per hour per API key.</p><p>That separation is exactly what you want when the critical path is capture and forwarding, not dashboard queries.</p><p>In other words, if you are building agents, copilots, or internal systems that need the behavioral stream while the session is still alive, the shape of the API matters. You do not want the useful path to depend on a reporting endpoint. You want ingestion to feel like infrastructure. 
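</p><p>For the read side, those documented private-endpoint limits are worth budgeting for in code. A minimal client-side token bucket, sketched against the 240-requests-per-minute figure (the structure is illustrative, not a PostHog client):</p>

```python
import time

class TokenBucket:
    """Client-side budget for a rate-limited endpoint (e.g. 240 req/min)."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 240 requests per minute -> 4 tokens per second, burst up to 240.
bucket = TokenBucket(capacity=240, refill_per_sec=240 / 60)
allowed = sum(1 for _ in range(300) if bucket.try_acquire())
```

<p>Capture stays unthrottled, so only readers of the private endpoints need this kind of guard; the point is that the guard lives on the reporting path, never on ingestion.</p><p>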
PostHog was a strong fit for that, which is why we started there.</p><p>I am also deliberately being precise here on latency. We treat the stream as effectively live in practice, which is what matters for our architecture. I am not making a hard benchmark claim here about exact milliseconds, and I am not claiming exact queue behavior for other vendors without controlled benchmarks. The important point is architectural: PostHog was designed in a way that made real-time capture and routing practical for our first version.</p><h2><strong>Why sessions and replay mattered</strong></h2><p>Single events are useful. Real user workflows are sequences.</p><p>PostHog supports both event capture and session replay. That mattered because teams often need both views at once: the granular stream for live state, and the session-level view for validating friction, detours, hesitation, and workflow breakdowns.</p><p>Session recordings are a separate unit from standard analytics events. They are stored and priced per session or recording, not per event. They are also more expensive per item than standard analytics events, which makes sense because they are a denser resource. PostHog&#8217;s pricing treats session replay as a separate resource, with a free tier and then tiered per-recording pricing beyond that. Product analytics events are priced separately, starting at roughly $0.00005 per event after the first 1 million monthly events, which are free.</p><p>Architecturally, that distinction is useful.</p><p>The high-volume event stream is what powers real-time state and downstream decisions. Session replay is the denser, slower path you use to inspect complete workflows, validate behavior, and build better models of how users actually move through the product. It helps you see detours, friction points, hesitation, and navigation patterns that matter when you are trying to understand real workflows instead of isolated clicks. 
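</p><p>The event-side pricing mentioned above is easy to sanity-check. A simplified starting-tier cost model (it ignores volume discounts and replay, which is metered per recording):</p>

```python
def monthly_event_cost(events: int, free: int = 1_000_000,
                       rate: float = 0.00005) -> float:
    """Starting-tier model: first 1M monthly events free, then ~$0.00005 each.
    Real pricing decreases with volume; this ignores those discounts."""
    return max(0, events - free) * rate

# 5M events/month: 4M billable at the starting rate, roughly $200.
cost = monthly_event_cost(5_000_000)
```

<p>That works out to about $50 per billable million at the starting tier, which is why a high-volume behavioral stream can stay economical.</p><p>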
If you care about product-aware AI, having both event-level and session-level data available in one stack is a real advantage.</p><h2><strong>Why pricing was part of the technical decision</strong></h2><p>Pricing is not a procurement detail here. It shapes what you can build.</p><p>If your event layer is too expensive, teams start deleting the very signal that makes behavioral systems useful. PostHog&#8217;s usage-based pricing keeps the basic model simple: the first 1 million product analytics events per month are free, then pricing starts at $0.00005 per event and decreases with scale. That is about $50 per 1 million events at the starting paid tier. Session replay has its own free tier and tiered pricing by recording, and those recordings are priced separately because they are a different resource than standard analytics events.</p><p>That mattered for us, but it also matters for anyone experimenting with real-time support or product-aware copilots. You need enough behavioral density to make the model useful. If every additional event feels like a billing risk, the system gets blind very quickly.</p><p>It also means startups and early-stage teams can get a meaningful amount of signal before the event layer becomes a real cost center. If you are capturing tens or even hundreds of thousands of events per month, you are still working in a pricing model that feels like infrastructure, not something that forces aggressive under-instrumentation on day one.</p><h2><strong>Where the SDK fits in</strong></h2><p>This is also why we launched the SDK in the shape we did.</p><p>autoplay-sdk is not another analytics SDK. It is the interface to the transformed stream. It consumes the connector over SSE or push webhook and hands your application typed Python objects, not raw JSON blobs. ActionsPayload gives you session ID, user ID, product ID, ordered actions, counts, and an embedding-ready .to_text() method. 
SummaryPayload gives you a prose version of the session when you want compact context instead of a full action list. The SDK also supports async and sync clients, push and pull delivery, Redis-backed buffering, and per-session concurrency isolation, so a slow downstream job in one session does not block another.</p><p>A simple frontend setup looks like this:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;javascript&quot;,&quot;nodeId&quot;:&quot;7e5f01b1-bdc7-4903-b320-0fdd8f4872d1&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-javascript">import posthog from 'posthog-js'

posthog.init('YOUR_AUTOPLAY_API_KEY', {
    api_host: 'https://us.i.posthog.com',
    person_profiles: 'identified_only',
    session_idle_timeout_seconds: 120,
    loaded: (posthog) =&gt; {
        posthog.identify(posthog.get_distinct_id(), {
            product_id: 'YOUR_AUTOPLAY_PRODUCT_ID',
        });
    },
})

// after login
posthog.identify(user.id, {
    product_id: 'YOUR_AUTOPLAY_PRODUCT_ID',
    email: user.email,
})</code></pre></div><p>And then the downstream service consumes the live stream:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;python&quot;,&quot;nodeId&quot;:&quot;ea446194-7616-47d8-a7e2-30b9e28760c1&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-python">import asyncio

from autoplay_sdk import AsyncConnectorClient

STREAM_URL = "https://your-connector.onrender.com/stream/YOUR_PRODUCT_ID"
API_TOKEN = "unkey_xxxx..."

# embed_api, embed, and vector_store stand in for your own
# embedding and vector-store clients
async def on_actions(payload):
    text = payload.to_text()
    embedding = await embed_api.create(input=text)
    await vector_store.upsert(id=payload.session_id, vector=embedding)

async def on_summary(payload):
    text = payload.to_text()
    await vector_store.upsert(id=payload.session_id, vector=await embed(text))

async def main():
    async with AsyncConnectorClient(url=STREAM_URL, token=API_TOKEN) as client:
        client.on_actions(on_actions)
        client.on_summary(on_summary)
        await client.run()

asyncio.run(main())</code></pre></div><p>The point is not just that this works. It is that customers do not need to build the interpretation layer themselves. PostHog captures the live behavioral stream. Autoplay turns it into typed, LLM-ready context. The SDK makes that context easy to consume.</p><p>That is useful whether you are building with Autoplay specifically or thinking about the general pattern. The better the event backbone is, the less custom interpretation code you need to write yourself.</p><h2><strong>What this enables in practice</strong></h2><p>This is not just good for us. It is useful for a lot of teams working close to product behavior.</p><p>A support team can build a copilot that already knows what the user was doing before they opened chat.</p><p>A product team can build a proactive assistant that notices repeated no-progress behavior and intervenes at the right moment.</p><p>A growth or onboarding team can track whether someone has actually completed a workflow, not just visited a page.</p><p>An engineering team can update user memory or retrieval systems from real behavior instead of only from text inputs.</p><p>These are the kinds of use cases our docs are built around: real-time events first, then memory, then golden paths, then trigger logic that decides when to proactively help and when to stay quiet.</p><p>The key point is that they can all share the same behavioral stream.</p><p>That is the real architectural benefit. You do not need one telemetry layer for analytics and a second completely separate telemetry layer for runtime AI. PostHog gives you the capture, identity, sessioning, and event backbone. Autoplay turns that stream into typed, model-readable context. 
The SDK exposes it in a form developers can use directly.</p><h2><strong>The real reason we started with PostHog</strong></h2><p>The shortest version is this:</p><p>We did not need a tool that was only good at telling us what happened last week.</p><p>We needed a tool that could sit in the loop between user behavior and system response.</p><p>That meant the event layer had to capture enough detail without heavy manual instrumentation, preserve identity and session continuity, support a flexible but typed data model, allow enrichment with user, group, and product context, route data outward in real time, remain open to programmatic consumption, support both events and sessions, and stay affordable enough to use as infrastructure.</p><p>PostHog had that combination.</p><p>That does not mean it is the only analytics layer we will ever support. It means it was the right place to start.</p><p>At Autoplay, that is what let us take a live behavioral stream, turn it into typed LLM-ready context, and use it to power agents and copilots in the moment. But the more general point is this: if you want product data to do more than describe the past, you need an event backbone that is comfortable powering systems outside the dashboard too.</p><p>That is why we started with PostHog. 
And that is why the setup is useful for our customers as well.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/p/why-we-started-with-posthog-at-autoplay/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/p/why-we-started-with-posthog-at-autoplay/comments"><span>Leave a comment</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Your AI Chatbot Is Waiting to Be Asked. ]]></title><description><![CDATA[Introducing the Autoplay Proactive SDK: the missing layer between what your users do and what your AI agents know.]]></description><link>https://blog.autoplay.ai/p/your-ai-chatbot-is-waiting-to-be</link><guid isPermaLink="false">https://blog.autoplay.ai/p/your-ai-chatbot-is-waiting-to-be</guid><dc:creator><![CDATA[Sam Nesbitt]]></dc:creator><pubDate>Mon, 13 Apr 2026 18:51:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7945525c-b0f0-4c4d-ab0f-b61fbd93c9f0_740x740.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the standard session replay experience, you&#8217;d pull up a recording after a user churned and work backwards, trying to figure out what went wrong. It was useful, but it was always too late.</p><p>So we built a pipeline to go deeper, ingesting sessions at scale, extracting structured intent signals: what workflows users were attempting, which tasks they completed, where they stalled. 
We built <code>TERRA</code>, a framework that maps user actions against the ideal &#8220;golden path&#8221; through a product, models completion rates per workflow, and infers intent at the session level.</p><p>Then the obvious question was: <strong>why are we only looking at this after the fact?</strong></p><h2><strong>The Problem With Reactive AI</strong></h2><h4>Most AI copilots wait to be asked. A user opens the chat, types a question, gets an answer. It&#8217;s a slightly smarter FAQ, but it misses the majority of users who never open the chat. They hit a wall and quietly leave.</h4><p>Take a PLG onboarding flow (e.g. SMSPs). New user signs up, no guided tour. They need to connect their Instagram account and upload their media before they can schedule their first post. They reach the auth screen, get confused, click back to the dashboard thinking they&#8217;re done. They try to post. Nothing works. First session. Often their last.</p><p>Your chatbot has no idea any of this is happening. Your support team can fix it, but only after the user is gone.</p><h4><strong>What the Data Stack Gets Wrong</strong></h4><p>You already have the data. LogRocket shows the user left the auth screen after 8 seconds. Amplitude shows a drop-off at onboarding step 3. Segment has a <code>connect_instagram_started</code> event with no completed match.</p><p>You know they left, where, and when. But none of these tools know what the user was <em>trying</em> to do, and without intent, goal, and progress, you can&#8217;t trigger action. You can&#8217;t tell your chatbot to intervene. You can&#8217;t send the right email. You just watch the funnel.</p><p>The gap isn&#8217;t in data collection. 
It&#8217;s in the layer that turns raw activity into something an AI agent can reason over.</p><h2><strong>Introducing the Autoplay Proactive SDK</strong></h2><h4>Today we&#8217;re launching the <a href="https://autoplayai.mintlify.app/quickstart">Autoplay Proactive SDK</a>: streaming what your users are doing, in real time, as clean, structured, LLM-ready context directly into your agents.</h4><p>Not raw telemetry. Structured payloads: the page the user is on, what they clicked, what they&#8217;re trying to accomplish, where they&#8217;ve stalled. Autoplay sits between your product and your copilot; it doesn&#8217;t replace Intercom or Zendesk, it gives them eyes.</p><p><strong>Pull: smarter answers when users ask.</strong> User asks &#8220;how do I schedule a post?&#8221; Without Autoplay, the chatbot returns a generic setup article. With Autoplay, it knows they reached the Meta auth screen 4 minutes ago and left without completing it, and delivers the exact missing step.</p><p><strong>Push: intervention before they ask at all.</strong> Autoplay detects the missed step the moment it happens and fires your copilot: &#8220;It looks like Instagram isn&#8217;t connected yet, here&#8217;s the one step you need.&#8221; User completes the flow. Schedules their first post. Stays.</p><h4><strong>The Three-Layer Model</strong></h4><p>The SDK is a progressive build-up. Each layer makes your copilot meaningfully smarter.</p><p><strong>Layer 1: Real-time events (available now.)</strong> Every user action is captured, processed (extraction &#8594; normalisation &#8594; optional LLM summarisation), and delivered as a typed payload. Your agent knows what the user is doing right now without waiting for them to describe it.</p><p><strong>Layer 2: User memory</strong> <em>(coming soon.) </em>A per-user knowledge profile, updated after every session: what they&#8217;ve mastered, what&#8217;s in progress, what they&#8217;ve never touched. 
Your agent stops suggesting workflows the user already knows and starts surfacing the actual gaps.</p><p><strong>Layer 3: Golden paths + knowledge base</strong> (<em>coming soon.) </em>Record the ideal journey for any workflow using the Autoplay Chrome extension. Autoplay indexes these in a vector database. At inference time, your agent retrieves the ideal path, compares it to what this user has done, and surfaces the precise next step.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;7749bc44-0367-4d99-985e-dc0465a7b24e&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext">Where the user is now        &#8592; real-time events
What they've already done    &#8592; user memory
Where they should be going   &#8592; golden path from knowledge base
                                       &#8595;
         copilot surfaces the single most relevant next step</code></pre></div><h3><strong>Getting Started</strong></h3><p>Integration takes minutes. </p><p><em>See our full docs <a href="https://autoplayai.mintlify.app/sdk/rag-example">here</a>.</em></p><h4><strong>Step 1 &#8212; Add the snippet to your frontend</strong></h4><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;javascript&quot;,&quot;nodeId&quot;:&quot;abb2c4df-1251-47e4-b4f3-04e0d425d04b&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-javascript">import posthog from 'posthog-js'

posthog.init('YOUR_AUTOPLAY_API_KEY', {
    api_host: 'https://us.i.posthog.com',
    person_profiles: 'identified_only',
    session_idle_timeout_seconds: 120,
    loaded: (posthog) =&gt; {
        posthog.identify(posthog.get_distinct_id(), {
            product_id: 'YOUR_AUTOPLAY_PRODUCT_ID',
        });
    },
})
</code></pre></div><p>After login, pass the user&#8217;s email to enable cross-session identity linking:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;javascript&quot;,&quot;nodeId&quot;:&quot;7075318c-8d63-4a77-8121-12cc5c12a06b&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-javascript">posthog.identify(user.id, {
    product_id: 'YOUR_AUTOPLAY_PRODUCT_ID',
    email: user.email,
})
</code></pre></div><h4>Step 2:  Install the SDK</h4><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;shell&quot;,&quot;nodeId&quot;:&quot;bedb7684-8bef-4759-a7b2-f459f4b75e61&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-shell">pip install autoplay-sdk</code></pre></div><p><code>Requires Python 3.10+</code></p><h4><strong>Step 3 &#8212; Receive your first event</strong></h4><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;python&quot;,&quot;nodeId&quot;:&quot;c29d3c34-6cf1-4c59-862f-403e34a929ce&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-python">import asyncio
from autoplay_sdk import AsyncConnectorClient

async def main():
    async with AsyncConnectorClient(url=STREAM_URL, token=API_TOKEN) as client:
        client.on_actions(lambda p: print(p.to_text()))
        client.on_summary(lambda p: print(p.summary))
        await client.run()

asyncio.run(main())</code></pre></div><h4><strong>Step 4 &#8212; Wire it to your copilot</strong></h4><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;python&quot;,&quot;nodeId&quot;:&quot;55f086fb-8639-49e5-8a7c-f182deace208&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-python">from autoplay_sdk import AsyncConnectorClient, ActionsPayload

async def on_actions(payload: ActionsPayload) -&gt; None:
    suggestion = await your_llm(
        system="You are a proactive product copilot. Suggest one helpful next step. Be brief.",
        user=f"## What the user is doing\n{payload.to_text()}",
    )
    if suggestion:
        await push_to_ui(payload.session_id, suggestion)

client = AsyncConnectorClient(url=CONNECTOR_URL, token=API_KEY)
client.on_actions(on_actions)</code></pre></div><p><code>payload.to_text() </code>is embedding-ready &#8212; pass it into your LLM context, upsert it to a vector store, or use it to trigger any downstream action: email, Slack notification, support escalation with full context attached.</p><p>Delivery is available as an SSE stream or push webhook. Both emit the same typed <code>ActionsPayload / SummaryPayload </code>objects.</p><h3><strong>What You Can Build</strong></h3><p><strong>Reduce first-session churn.</strong> Detect exactly where users stall and trigger a contextual nudge before they leave in-app, chatbot, or email.</p><p><strong>Eliminate generic chatbot answers.</strong> Every time a user opens support, your agent already knows what they were doing in the product 30 seconds ago.</p><p><strong>The reboarding play.</strong> User has been on your platform three months but has never opened Analytics. Autoplay detects the gap. Copilot fires: &#8220;Want to see your best-performing posts?&#8221; Analytics adoption. 
Upgrade trigger.</p><p><strong>Proactive upgrade signals.</strong> User hits a limit and starts exploring pricing &#8212; Autoplay detects the sequence and triggers your sales motion before they bounce.</p><p><strong>Live-context RAG.</strong> Embed ActionsPayload events into your vector store in real time for retrieval grounded in what users actually do &#8212; not just your docs.</p><h4><strong>Join the Private Beta</strong></h4><p>Early access teams get the full SDK, a dedicated connector URL and API token, direct Slack support from us, and early influence on the roadmap.</p><p><strong><a href="https://join.slack.com/share/enQtMTA4OTc0NjAwNDc4OTEtZTRhYTQyOTYyZWQ2NTI1YmM3MmZmODE1MWI1ZTRiZTk4MDIzMGMwNTE3M2JiMjE4ZTQxYjI2NjE5YTA0NmRjYw">&#8594; Sign-up to our private beta access</a></strong></p><p><strong><a href="https://autoplay-ai-dev.slack.com/join/shared_invite/zt-3uslbaddx-UT2Cl18BGiLkeMwJ439ppg#/shared-invite/email">&#8594; Join our Slack workspace</a></strong> &#8212; drop a message in #just-integrated after you add the snippet and we&#8217;ll get your connector set up same day.</p><p><strong>&#8594; Read the <a href="https://autoplayai.mintlify.app/quickstart">docs</a></strong></p><h4><strong>What We Believe</strong></h4><p>The future of AI in software isn&#8217;t a chat widget that waits. It&#8217;s an agent that knows &#8212; what this user is trying to do, where they are right now, what they&#8217;ve done before, and what comes next. Everything we&#8217;ve built &#8212; the session analysis, intent modelling, golden path framework, the pipeline from raw DOM events to LLM-ready context, was pointing here.</p><p>We think this changes the economics of onboarding, support, and activation. If you&#8217;re building AI-powered products and want your agents to actually know what&#8217;s happening, come build with us.</p><h6><em>Built in New York. Questions? 
sam@autoplay.nyc</em></h6><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[My experience using Claude to book an Airbnb.]]></title><description><![CDATA[Why browser agents can navigate but struggle to browse]]></description><link>https://blog.autoplay.ai/p/my-experience-using-claude-to-book</link><guid isPermaLink="false">https://blog.autoplay.ai/p/my-experience-using-claude-to-book</guid><dc:creator><![CDATA[Sam nesbitt]]></dc:creator><pubDate>Wed, 08 Apr 2026 13:31:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1c780abb-93cd-4d69-a12b-f28e8db4ca34_1200x675.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I tried using Claude&#8217;s browser copilot to book an Airbnb for the summer. I gave it my dates, budget, area, and all the usual criteria. It pulled options, filtered, summarized tradeoffs. Then, just to be sure of the results, I went back to Airbnb, and did it myself. My apartments were noticeably better, and what took me nearly an hour with Claude, I was able to do myself in half the time.</p><p>The agent&#8217;s problem wasn&#8217;t reasoning, or understanding my prompts. It was that it used Airbnb badly. It treated the search like a structured query: enter constraints, get results, pick the best one. What I actually did looked nothing like that. I was moving around the map, zooming in and out, opening listings, closing them. I&#8217;d look at three places on the same street and decide the street felt off, shift the map slightly, start again. Photos did most of the work. 
You can tell fast if a place will feel cramped, if the light is bad, if it&#8217;s one of those apartments that looks good in the first shot and gets worse after. None of that is in a filter.</p><p>There&#8217;s also a constant re-ranking happening that you&#8217;re not conscious of. You see something slightly better and everything before it drops. You revisit a listing you&#8217;d dismissed and it moves up because the alternatives got worse. The agent had no access to any of that.</p><p>To know whether the agent did a bad job, you have to do the task yourself, at which point you&#8217;ve already done the job. You could give it better instructions, and send it back. But after 45 minutes of watching it search, who wants to evaluate its results, write new prompts, and send it through the whole process again? You either trust what it came back with, or it&#8217;s faster to do it yourself.</p><p>My session replay, every click, every listing opened and closed, every time I nudged the map, is a complete record of how the preference formed. It&#8217;s not noise, but a decision taking shape in real time, and it&#8217;s a better benchmark for evaluating the agent than checking whether it completed the booking. The question is whether it converges on the same kind of place and explores the same parts of the map. The ground truth is the process, not the outcome.</p><p>Most agent evaluation misses this. We keep asking &#8220;did it finish the task&#8221; when the more useful question is &#8220;did it make the same moves a good human would make.&#8221;</p><p>The Airbnb UI is built for humans. Visual, spatial, optimized for comparison. It evolved to match how people actually process and filter information. We moved from databases to spreadsheets to SaaS interfaces because each step was more legible to the way humans think.</p><p>For an agent, all of that is overhead. It doesn&#8217;t need a map. It doesn&#8217;t parse photos the way we do. 
What it actually wants is closer to a spreadsheet, 800 rows, clean columns, every listing queryable. The agent could reason over it directly without navigating a UI built for someone else.</p><p>Are we regressing? Is the right interface for an agentic world just a database with an API? Run a filter algorithm, return ranked results, done. That&#8217;s not a new idea, it&#8217;s basically what SaaS was before anyone put a visual layer on top of it.</p><p>The UI isn&#8217;t just a display layer for humans. It&#8217;s where intent forms. I didn&#8217;t know exactly what I wanted when I started; browsing was how I figured it out. The visual, spatial, comparative experience of Airbnb isn&#8217;t a nice-to-have on top of the data. It&#8217;s the process by which a vague preference becomes a specific one. Strip it out and you haven&#8217;t made the problem simpler, you&#8217;ve just removed the mechanism by which the goal gets defined.</p><p>The agent working from a spreadsheet would be faster and more systematic and would still pick worse apartments, because it would be optimizing against a goal that was never properly formed in the first place.</p><p>So where does that leave browser agents? The question isn&#8217;t whether they&#8217;ll replace humans doing this kind of task. It&#8217;s whether they can participate in the loop that makes the task work. The one where preferences shift in real time based on what you&#8217;re seeing, where the UI is doing cognitive work, not just displaying results.</p><p>What changes this isn&#8217;t better reasoning or better prompts. It&#8217;s agents that read UI behavior as signal: which listings got attention, which got skipped, what got revisited, and what got dropped the moment something better appeared, and use that to infer intent in real-time instead of waiting to be told what to optimize for. The UI doesn&#8217;t become redundant. It becomes the data source. 
The session becomes the instruction set.</p><p>That&#8217;s a different architecture than most browser agents are built on. And it&#8217;s why an Airbnb test is more interesting than it looks.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/p/my-experience-using-claude-to-book?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/p/my-experience-using-claude-to-book?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Cockpit Doesn’t Teach You. It Shows You.]]></title><description><![CDATA[The first time you sit in a cockpit, you realize no one is trying to comfort you.]]></description><link>https://blog.autoplay.ai/p/the-cockpit-doesnt-teach-you-it-shows</link><guid isPermaLink="false">https://blog.autoplay.ai/p/the-cockpit-doesnt-teach-you-it-shows</guid><dc:creator><![CDATA[Sam nesbitt]]></dc:creator><pubDate>Fri, 03 Apr 2026 13:04:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qANm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca386de0-93c5-4e91-9b3a-010ea43696ae_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The first time you sit in a cockpit, you realize no one is trying to comfort you. The cockpit isn&#8217;t designed to be friendly; it&#8217;s designed to be true. 
Lights flicker, needles shift, alarms whisper - but each one means something specific. It doesn&#8217;t explain flying. It tells you what&#8217;s happening, moment by moment, in a language that doesn&#8217;t lie.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qANm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca386de0-93c5-4e91-9b3a-010ea43696ae_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qANm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca386de0-93c5-4e91-9b3a-010ea43696ae_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!qANm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca386de0-93c5-4e91-9b3a-010ea43696ae_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!qANm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca386de0-93c5-4e91-9b3a-010ea43696ae_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!qANm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca386de0-93c5-4e91-9b3a-010ea43696ae_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qANm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca386de0-93c5-4e91-9b3a-010ea43696ae_1536x1024.png" width="546" height="364.125" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ca386de0-93c5-4e91-9b3a-010ea43696ae_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:546,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qANm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca386de0-93c5-4e91-9b3a-010ea43696ae_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!qANm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca386de0-93c5-4e91-9b3a-010ea43696ae_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!qANm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca386de0-93c5-4e91-9b3a-010ea43696ae_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!qANm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca386de0-93c5-4e91-9b3a-010ea43696ae_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p style="text-align: justify;">Most software, on the other hand, behaves like a well-meaning teacher. We smooth the edges, add a bit of animation, pare back the clutter, and assume that if the UI is clean enough, people will figure it out. For simple products, they do. You open a form, click a button, and the software behaves as we promised. The illusion of clarity feels real - until the product stops being simple.</p><p style="text-align: justify;">Once a product becomes a real system (multiple roles, tangled permissions, messy data, exceptions, &#8220;it depends&#8221; logic), the user isn&#8217;t just navigating a product anymore. They&#8217;re trying to complete a workflow inside a living environment, with all its quirks and resistances.</p><p style="text-align: justify;">That&#8217;s why so many AI copilots feel brilliant in demos and disappointing in real life. They can speak. They can explain. They can even suggest steps. 
But they never feel like they&#8217;re with you inside the product. They feel like they&#8217;re standing outside the room, talking through the door, while you&#8217;re inside, trying to keep control of something that&#8217;s already moving.</p><p style="text-align: justify;">Most software does the opposite of a cockpit: it gives you options before certainty. It makes you hunt for context, then punishes you when you pick the wrong path. Add AI on top, and you don&#8217;t fix the problem. You magnify it - because the weakness isn&#8217;t the assistant&#8217;s language. It&#8217;s its blindness to state.</p><p style="text-align: justify;">The UI isn&#8217;t going away. It&#8217;s just going to stop being uniform. People love saying, &#8220;Agents will replace UIs.&#8221; But I don&#8217;t think the UI disappears. I think it stops being static. The endgame isn&#8217;t a blank screen where you type commands. It&#8217;s an interface that reshapes itself around the moment: what you&#8217;re trying to do, what you&#8217;ve already done, what your role allows, what your team usually does, and what tends to go wrong at this step.</p><p style="text-align: justify;">That&#8217;s why UX is more important than ever. It&#8217;s no longer just about aesthetics. It&#8217;s about the product&#8217;s ability to keep you moving when reality gets messy. And in any real system, reality will always be messy.</p><p style="text-align: justify;">Here&#8217;s why copilots keep getting complex products wrong: in rich, evolving software, there is no single &#8220;correct workflow.&#8221; There are workflows, plural. One person&#8217;s happy path is another person&#8217;s exception. One team&#8217;s process is another team&#8217;s workaround. Power users don&#8217;t follow the docs. 
They discover patterns and combinations that the documentation never imagined.</p><p style="text-align: justify;">So when a copilot is trained only on text (help center pages, macros, old docs, forum posts), it makes a quiet, dangerous mistake: it confuses &#8220;what&#8217;s written down&#8221; with &#8220;what&#8217;s possible.&#8221; That&#8217;s where you hear the false negative: &#8220;It can&#8217;t be done.&#8221; Not because the product can&#8217;t do it, but because the assistant can&#8217;t see the product. It can&#8217;t see your permissions, your UI state, the earlier steps you&#8217;ve already taken, or what just failed. It&#8217;s reasoning with a blindfold on.</p><p style="text-align: justify;">And when you build &#8220;agentic automation&#8221; on top of that, the problem becomes personal. Now the agent is doing things that you, the user, have to debug inside a system you may not fully understand.</p><p style="text-align: justify;">If copilots are going to work inside real products, they need to be grounded more like a cockpit than a manual. They need to quietly, constantly, and in real time answer questions like: What is the user trying to accomplish right now? Where are they in the workflow? What&#8217;s on screen and available to them? What state is the system in? What just changed? Are they progressing, exploring, or stuck?</p><p style="text-align: justify;">Everyone rushes to the chat UI because it&#8217;s visible. But chat is only useful when it sits on top of product understanding. Otherwise, it&#8217;s just a very polite search box, pretending to be a companion.</p><p style="text-align: justify;">For years, &#8220;good software&#8221; meant fewer clicks. The next era is about fewer moments of uncertainty. The most valuable products won&#8217;t be the ones that answer questions faster. 
They&#8217;ll be the ones that reduce the need to ask in the first place - by recognizing intent, tracking where you are in the workflow, and surfacing the next meaningful step at the moment it matters.</p><p style="text-align: justify;">The interface becomes personal, not because it&#8217;s pretty, but because it&#8217;s aware. Aware of your context, your history, your role, and your confusion. And that&#8217;s the real shift: not conversation replacing clicks, but software finally meeting users where they are - instead of where the docs assumed they should be.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/p/the-cockpit-doesnt-teach-you-it-shows/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/p/the-cockpit-doesnt-teach-you-it-shows/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[A week in Casablanca with Leyton and Leyton CognitX.]]></title><description><![CDATA[We went there with a clear goal.]]></description><link>https://blog.autoplay.ai/p/a-week-in-casablanca-with-leyton</link><guid isPermaLink="false">https://blog.autoplay.ai/p/a-week-in-casablanca-with-leyton</guid><dc:creator><![CDATA[Sam nesbitt]]></dc:creator><pubDate>Fri, 27 Mar 2026 11:30:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/99e9ea38-68c2-43f1-a783-5deb237a2a56_1333x1333.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We went there with a clear goal. 
Ship agents inside real internal tools. Tools people rely on every day, where workflows are messy and mistakes have consequences.</p><p>What stood out quickly was how the <a href="https://www.linkedin.com/company/leyton/">Leyton</a> and <a href="https://www.linkedin.com/company/leytoncognitx/">Leyton CognitX</a> teams operate. They are not trying to fit their business into SaaS tools. They build internal systems that match how they work, then start replacing parts of those workflows with agents.</p><p>This is a pattern I keep seeing. Strong teams move toward control. They want systems that reflect their workflows, not tools they need to work around. Once that is in place, the next step is obvious. Automate parts of the workflow.</p><p>The interesting part is where things break.</p><p>Building an agent is no longer the hard part. You can get something that looks good in isolation without much effort. The issue shows up when the agent runs inside a real product.</p><p>Internal tools are not clean environments. Permissions change. UI changes based on role and state. Data is inconsistent. There is rarely one correct path through a workflow. What looks like an edge case in theory happens all the time in practice.</p><p>So you end up with agents that can reason, but still make bad decisions. Not because they are incapable, but because they are missing context.</p><p>We kept coming back to a small set of questions. If an agent cannot answer these, it cannot be reliable.</p><p>What is the user trying to do.</p><p>Where are they in the workflow.</p><p>What is on the screen right now.</p><p>What state is the system in.</p><p>Is the user stuck or progressing.</p><p>Without this, the agent guesses. Sometimes it gets it right. Often it does not.</p><p>This is where <a href="https://www.linkedin.com/company/autoplay-ai/">Autoplay</a> fits.</p><p>We focus on connecting behavior to intent and workflow context in real time. Mapping actions to tasks. 
Tracking how users move through workflows. Understanding what is happening on the screen during execution. Detecting when someone is hesitating or drifting.</p><p>That changes how an agent behaves. It stops reacting to isolated inputs and starts responding to what is happening in the product.</p><p>The week mattered because the conversations stayed grounded. Everyone focused on what needs to be true for an agent to work in production, inside a tool people use all day.</p><p>Internal tools make this possible. You control the workflow. You control the UI. You control how the system is instrumented. That gives you the foundation to build something reliable.</p><p>The takeaway for me is simple.</p><p>Agents do not fail because they lack capability. They fail because they lack context.</p><p>The next phase of agents will be defined by how well they understand the product they operate in. UI state, workflow position, user intent.</p><p>That is what makes the difference between something that works in a demo and something people trust in their day to day work.<br></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/p/a-week-in-casablanca-with-leyton?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/p/a-week-in-casablanca-with-leyton?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Will open-source 
software replace SaaS?]]></title><description><![CDATA[and what we can learn from meal delivery services]]></description><link>https://blog.autoplay.ai/p/will-open-source-software-replace</link><guid isPermaLink="false">https://blog.autoplay.ai/p/will-open-source-software-replace</guid><dc:creator><![CDATA[Sam nesbitt]]></dc:creator><pubDate>Wed, 04 Mar 2026 17:06:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Jqv5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb6d8f93-c190-471b-b8da-c825a2fedebc_1200x800.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Jqv5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb6d8f93-c190-471b-b8da-c825a2fedebc_1200x800.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Jqv5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb6d8f93-c190-471b-b8da-c825a2fedebc_1200x800.png 424w, https://substackcdn.com/image/fetch/$s_!Jqv5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb6d8f93-c190-471b-b8da-c825a2fedebc_1200x800.png 848w, https://substackcdn.com/image/fetch/$s_!Jqv5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb6d8f93-c190-471b-b8da-c825a2fedebc_1200x800.png 1272w, https://substackcdn.com/image/fetch/$s_!Jqv5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb6d8f93-c190-471b-b8da-c825a2fedebc_1200x800.png 
1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Jqv5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb6d8f93-c190-471b-b8da-c825a2fedebc_1200x800.png" width="1200" height="800" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/db6d8f93-c190-471b-b8da-c825a2fedebc_1200x800.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:800,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Jqv5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb6d8f93-c190-471b-b8da-c825a2fedebc_1200x800.png 424w, https://substackcdn.com/image/fetch/$s_!Jqv5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb6d8f93-c190-471b-b8da-c825a2fedebc_1200x800.png 848w, https://substackcdn.com/image/fetch/$s_!Jqv5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb6d8f93-c190-471b-b8da-c825a2fedebc_1200x800.png 1272w, https://substackcdn.com/image/fetch/$s_!Jqv5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb6d8f93-c190-471b-b8da-c825a2fedebc_1200x800.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" 
class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em><strong>SAP rival Odoo is currently valued at $5.3 billion</strong></em></p><p>Over the last decade, SaaS became the dominant model for enterprise software. Every workflow became a subscription: CRM, HR, analytics, marketing, support. Entire companies were built around selling access to a UI.</p><p>A combination of open-source software, AI agents, and better infrastructure is starting to challenge that model. Instead of buying software, companies are increasingly asking a different question:</p><p><em><strong>Should we just build this ourselves?</strong></em></p><p>One of our customers has done exactly this.</p><p>They replaced Salesforce with TwentyCRM. Their entire HR and ERP stack runs through Odoo. 
And their long-term plan is pretty simple: build most things internally using open-source software.</p><p>For years, building internal software was expensive, slow, and fragile. You needed a team of engineers just to recreate something Salesforce or HubSpot already did.</p><p>Now that assumption is breaking down.</p><p>And the data actually supports this shift. Today <strong>83% of companies already use custom-built software somewhere in their operations</strong> (dashboards, internal workflows, automations, integrations), even though most still rely on off-the-shelf tools for standard functions.</p><p>At the same time, the <strong>global market </strong>for custom software development is roughly<strong> $43B today </strong>and projected to reach about<strong> $146B </strong>by<strong> 2030</strong>.</p><p>That kind of growth doesn&#8217;t happen unless companies are increasingly deciding to build more software themselves, and you can start to see this shift in the tools they&#8217;re choosing.</p><h3><strong><br>One of the clearest examples is open-source business software, especially platforms like Odoo.</strong></h3><p>Odoo has quietly become one of the fastest-growing ERP platforms in the world. What makes it interesting is not just that it&#8217;s open source; it&#8217;s that it behaves more like a software framework than a product.</p><p>You get accounting, CRM, invoicing, inventory, support, etc.</p><p>But the real point is that companies can modify it endlessly. Instead of paying tens or hundreds of thousands a year for multiple SaaS tools, teams can start with Odoo and customize the workflows they actually need.</p><p>The second signal is companies aggressively reducing their SaaS footprint.</p><p>A widely discussed example is <strong>Klarna</strong>. 
During its internal AI transformation, the company removed <strong>over 1,200 external SaaS tools</strong> while automating large parts of internal workflows.</p><p>At the same time, Klarna deployed an AI assistant that handled <strong>2.3 million customer conversations in its first month</strong>, doing the work of roughly <strong>700 support agents</strong> while cutting resolution time from 11 minutes to 2 minutes.</p><p>The point here isn&#8217;t that AI replaces humans.</p><p>The interesting thing is that AI replaces software interfaces.</p><p>Instead of humans clicking through tools, agents interact directly with APIs and systems, orchestrating workflows across multiple products simultaneously.</p><p>When that happens, the UI layer, the thing SaaS companies sell, becomes less important.</p><p>But the easiest way I&#8217;ve found to think about this shift is actually <strong>food</strong>.</p><p>The food world has three basic models:</p><ul><li><p>Restaurants</p></li><li><p>Meal kits</p></li><li><p>Grocery stores</p></li></ul><p>Each solves the same problem (feeding people) but in completely different ways.</p><p>And oddly enough, these map almost perfectly to how software is evolving.</p><h4><strong><br>1. Restaurants (Traditional SaaS)</strong></h4><p>Restaurants are the fully packaged experience.</p><p>You walk in, someone else cooks the meal, someone else designs the menu, someone else handles the logistics. Your only job is to consume the result.</p><p>Uber Eats and delivery apps pushed this even further. Now you can outsource cooking entirely.</p><p>This is basically what SaaS did for software. SaaS says: don&#8217;t worry about infrastructure, workflows, hosting, updates, engineering; we&#8217;ve already built the system. Just log in and use it.</p><p>Salesforce, HubSpot, Zendesk: these are restaurants.</p><p>And this made sense. Running your own software was like running your own restaurant kitchen: expensive and complicated. 
And in many cases, this is still the right choice.  If something is truly best-in-class, it&#8217;s often cheaper to buy the experience rather than build it.</p><h4><strong><br>2. Meal delivery kits (Open-source platforms)</strong></h4><p>Then meal delivery services appeared: HelloFresh, Blue Apron, etc.</p><p>They changed the model.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9Szy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65264a27-1488-46fe-a963-f3c1d6f83cfc_378x335.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9Szy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65264a27-1488-46fe-a963-f3c1d6f83cfc_378x335.png 424w, https://substackcdn.com/image/fetch/$s_!9Szy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65264a27-1488-46fe-a963-f3c1d6f83cfc_378x335.png 848w, https://substackcdn.com/image/fetch/$s_!9Szy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65264a27-1488-46fe-a963-f3c1d6f83cfc_378x335.png 1272w, https://substackcdn.com/image/fetch/$s_!9Szy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65264a27-1488-46fe-a963-f3c1d6f83cfc_378x335.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9Szy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65264a27-1488-46fe-a963-f3c1d6f83cfc_378x335.png" width="468" height="414.76190476190476" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/65264a27-1488-46fe-a963-f3c1d6f83cfc_378x335.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:335,&quot;width&quot;:378,&quot;resizeWidth&quot;:468,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9Szy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65264a27-1488-46fe-a963-f3c1d6f83cfc_378x335.png 424w, https://substackcdn.com/image/fetch/$s_!9Szy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65264a27-1488-46fe-a963-f3c1d6f83cfc_378x335.png 848w, https://substackcdn.com/image/fetch/$s_!9Szy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65264a27-1488-46fe-a963-f3c1d6f83cfc_378x335.png 1272w, https://substackcdn.com/image/fetch/$s_!9Szy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65264a27-1488-46fe-a963-f3c1d6f83cfc_378x335.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Instead of cooking everything for you, they give you ingredients + recipes.</p><p>You still cook the meal, but the hard part, planning and sourcing,  is done.</p><p>The model clearly resonated. The <strong>global meal-kit market</strong> is roughly <strong>$25&#8211;33B </strong>today and projected to exceed <strong>$70B </strong>over the next decade. In the U.S. alone it was about <strong>$10.4B </strong>in <strong>2023</strong> and is expected to more than <strong>double by 2030.</strong></p><p>That&#8217;s what open-source platforms like Odoo or <strong>Twenty CRM</strong> represent.</p><p>They give you the structure of enterprise software, but let you modify it however you want.</p><p>You aren&#8217;t locked into someone else&#8217;s workflow.</p><h4><strong><br>3. 
Grocery shopping (build-it-yourself software)</strong></h4><p>Finally there&#8217;s grocery shopping.</p><p>You buy raw ingredients and cook everything yourself.</p><p>This used to be the equivalent of building internal tools from scratch, something only large tech companies could afford.</p><p>But AI agents are starting to change that too.</p><p>Agents can now write code, connect APIs, automate workflows, and operate software directly.</p><p>In other words:</p><p><strong>AI is making building your own software dramatically easier.</strong></p><h3><strong><br>The psychology behind these models is actually quite predictable.</strong></h3><p>Most people don&#8217;t cook every meal themselves, not because they can&#8217;t, but because of time, expertise, and convenience.</p><p>Restaurants win when people want simplicity.</p><p>Meal kits win when people want control but not complexity.</p><p>Cooking wins when people want full customization.</p><p>Software markets are starting to follow the same pattern.</p><p>For years SaaS dominated because it was the easiest option.</p><p>But as customization becomes easier (and cheaper) the balance starts shifting.</p><p>Companies are realizing something important:</p><p>They don&#8217;t actually want generic workflows.</p><p>They want software that reflects <strong>how their business actually works</strong>.</p><h3><strong><br>AI agents accelerate this shift even further.</strong></h3><p>Instead of humans navigating software, agents can execute workflows directly across systems, retrieving data, triggering actions, and coordinating tasks.</p><p>This changes the economics of software.</p><p>Traditional SaaS pricing is built on per-seat subscriptions.</p><p>But agents don&#8217;t buy seats.</p><p>They operate systems.</p><p>At the same time, even SaaS companies are moving toward agent-driven architectures. 
For example <strong>Salesforce</strong> now runs autonomous agents through its Agentforce platform, and its CEO has said AI already performs <strong>30&#8211;50% of internal work inside the company</strong>.</p><p>That alone tells you something about where things are going.</p><h3><strong><br>So where does this leave SaaS?</strong></h3><p>Probably not dead.</p><p>Restaurants still exist. They&#8217;re just no longer the only way people eat.</p><p>What seems more likely is a shift in the software stack:</p><p><strong>&#8226; Open-source platforms become the base layer</strong></p><p><strong>&#8226; AI agents orchestrate workflows on top</strong></p><p><strong>&#8226; SaaS products become components rather than destinations</strong></p><p>Instead of logging into five different tools, companies will have systems that assemble themselves around their workflows.</p><p>The UI becomes optional.</p><p>The agent becomes the interface.</p><p>And software stops looking like products, and starts looking more like infrastructure.</p><h3><strong><br>One last thought.</strong></h3><p>The SaaS era optimized for <strong>distribution</strong>.</p><p>The next era might optimize for <strong>adaptability</strong>.</p><p>Software that can reshape itself around a company will always beat software that forces the company to reshape itself around the tool.</p><p>Which is why open source + AI + agents together feel like something bigger than just another technology shift.</p><p>It feels like the moment when <strong>building software becomes easier than buying it</strong>.</p><p>And historically, when that happens, markets change very quickly.<br></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p><p 
class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/p/will-open-source-software-replace?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/p/will-open-source-software-replace?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p>]]></content:encoded></item><item><title><![CDATA[What Building with AI Taught Us About Software]]></title><description><![CDATA[Robustness, Composability, and the Limits of Agents]]></description><link>https://blog.autoplay.ai/p/what-building-with-ai-taught-us-about</link><guid isPermaLink="false">https://blog.autoplay.ai/p/what-building-with-ai-taught-us-about</guid><dc:creator><![CDATA[Sam nesbitt]]></dc:creator><pubDate>Thu, 08 Jan 2026 17:17:06 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4905320e-f190-43be-84f9-60e7aa357b4b_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We&#8217;ve been wrestling with a question that keeps coming up in every serious AI conversation: will AI replace software? More specifically, will agents replace workflows?</p><p>At first glance, it looks like that future is already here. New AI-native &#8220;replacements&#8221; for SaaS appear every week: AI CRMs, AI recruiting tools, AI design tools, AI everything. But once you actually use them, a pattern emerges: almost all of them lack robustness.</p><p>And robustness is the thing that matters most.</p><p>By robustness, I don&#8217;t mean polish or speed. I mean the ability to support real-world complexity: deep customization, edge cases, multiple stakeholders, evolving use cases, and scale over time. The kind of robustness you see in tools like CAD suites, professional photo-editing software, or enterprise CRMs. 
These tools work not because they automate everything, but because they let humans shape them endlessly around their specific needs.</p><p>Robustness is what allows software to scale horizontally and vertically. Instead of building a tool that solves one narrow problem extremely well, you build a system that can be adapted to many problems, and then further adapted within each of those problems. That&#8217;s why one piece of software can power thousands of companies, each using it in slightly (or wildly) different ways.</p><p>VCs understand this instinctively. That&#8217;s why, over the past decade, they&#8217;ve bet heavily on vertical SaaS; software that deeply understands a domain but still preserves flexibility. What we&#8217;re seeing now is the same bet being made again, just rebranded: vertical agents instead of vertical SaaS. The hope is that agents will finally be able to encode that flexibility automatically.</p><p>So far, they haven&#8217;t.</p><p>A big contributor to this problem is what people loosely call &#8220;vibe coding.&#8221; Yes, it&#8217;s impressive that you can spin up a prototype, or even a simple production app, in hours instead of months. But there&#8217;s a cost. Everything starts to look the same. The same layouts, the same interactions, the same mental models.</p><p>Vibe-coded software is limited by what the model thinks software should look and feel like. Which means that genuinely clever, opinionated, or novel UI/UX gets flattened into safe, repeatable patterns. That&#8217;s not just a taste issue; it&#8217;s a data issue. The model can only remix what it has seen.</p><p>As a result, most AI products end up differentiated by only two things:</p><ol><li><p>The context data they&#8217;re plugged into</p></li><li><p>Minor UI variations on top of the same underlying interaction model<br></p></li></ol><p>But here&#8217;s the problem: the moment you try to make that software more powerful, e.g. 
by adding features, supporting edge cases, or enabling deeper customization, you start removing AI from the critical path. You fall back to explicit controls, configuration, logic, and structure. In other words, software.</p><p>The most robust piece of software most people have ever used is probably Excel. At its core, it&#8217;s almost offensively simple. No opinions. No automation. Just cells and functions. Yet it works as both a backend and a frontend. You can model a business, run analytics, build workflows, or create something completely unintended by its creators.</p><p>Excel doesn&#8217;t succeed despite its lack of AI; it succeeds because of its raw, composable nature. AI can sit on top of it, but it&#8217;s not what gives Excel power.</p><p>This leads to an uncomfortable conclusion: a lot of AI today is being used to solve problems that don&#8217;t actually need to be solved.</p><p>Take AI in recruitment. In practice, most &#8220;AI recruiting agents&#8221; are just large language models wrapped around filters and keyword searches, things that already exist and work very well in LinkedIn Recruiter or Sales Navigator. The pitch is that it feels magical the first time: type what you want, get results. But the magic doesn&#8217;t scale. It breaks down the moment you care about nuance, tradeoffs, or consistency.</p><p>They&#8217;re party tricks, not systems.</p><p>In these cases, AI doesn&#8217;t produce better outcomes. It just produces outcomes faster. And speed without robustness is rarely a long-term advantage.</p><p>This distinction matters a lot for how we think about AI at Autoplay.</p><p>Our AI features (golden path deviation, unsupervised clustering, hesitation detection, etc.) are intentionally treated like functions. They&#8217;re closer to filters, tags, or functions/columns in a Google Sheet than they are to autonomous agents. Each one does something specific, deterministic, and composable. 
On their own, they&#8217;re not the product.</p><p>The product is the experience around them.</p><p>Autoplay should feel as robust as any serious analytics or product tool. That means our AI features must work in combination with non-AI features. They need to respect filters, cohorts, URLs, time ranges, and organizational context. They need to be debuggable, inspectable, and controllable.</p><p>Our first real mistake was leaning too hard into a semantic search / ChatGPT-style interface, asking questions like &#8220;Where do users get stuck?&#8221; That sounds powerful, but it&#8217;s fundamentally flawed. Without isolating control factors (specific cohorts, specific flows, specific dates), you don&#8217;t get truth. You get plausible answers.</p><p>And plausible answers are dangerous.</p><p>Real insight comes from constraint, not abstraction. Cohorts. URLs. Time windows. Comparisons. AI can help compute, surface, and suggest, but robustness comes from giving humans the tools to reason precisely.</p><p>The future isn&#8217;t AI replacing software. It&#8217;s AI becoming a primitive inside robust software. A function, not an agent. 
A building block, not the building.</p><p>And the companies that win won&#8217;t be the ones with the best demo, they&#8217;ll be the ones that understand that robustness is still the hard part.<br></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/p/what-building-with-ai-taught-us-about/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/p/what-building-with-ai-taught-us-about/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[The Reason Co-Pilots Keep Getting Complex Products Wrong]]></title><description><![CDATA[Why Co-Pilots Need Actual UI Understanding, Not Just Good Language Models]]></description><link>https://blog.autoplay.ai/p/the-reason-co-pilots-keep-getting</link><guid isPermaLink="false">https://blog.autoplay.ai/p/the-reason-co-pilots-keep-getting</guid><dc:creator><![CDATA[Sam nesbitt]]></dc:creator><pubDate>Wed, 19 Nov 2025 11:16:53 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/40286001-f76e-42e9-90a4-9792e599dcf7_3136x1344.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was using PostHog&#8217;s co-pilot recently and noticed something interesting. </p><p>Normally, when AI gets things wrong, it tends to hallucinate, confidently giving you answers that are completely off.</p><p>But this time, it wasn&#8217;t hallucinating. It was doing the opposite. 
It kept telling me what workflows &#8220;weren&#8217;t possible.&#8221;</p><p>Not wrong answers. Not made-up instructions. Just false negatives; the AI confidently claiming something <em>couldn&#8217;t</em> be done, even though it absolutely could.</p><p>This revealed a new problem: the co-pilot wasn&#8217;t actually reasoning about the product. It was reasoning about the documentation.</p><p>Co-pilots rely almost entirely on knowledge bases - documentation, forums, API pages, support articles - and those don&#8217;t always reflect how the products work.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rsRa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea748d0d-b86a-40b4-991e-652cacf5aa23_1198x1076.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rsRa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea748d0d-b86a-40b4-991e-652cacf5aa23_1198x1076.png 424w, https://substackcdn.com/image/fetch/$s_!rsRa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea748d0d-b86a-40b4-991e-652cacf5aa23_1198x1076.png 848w, https://substackcdn.com/image/fetch/$s_!rsRa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea748d0d-b86a-40b4-991e-652cacf5aa23_1198x1076.png 1272w, https://substackcdn.com/image/fetch/$s_!rsRa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea748d0d-b86a-40b4-991e-652cacf5aa23_1198x1076.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!rsRa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea748d0d-b86a-40b4-991e-652cacf5aa23_1198x1076.png" width="606" height="544.2871452420701" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ea748d0d-b86a-40b4-991e-652cacf5aa23_1198x1076.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1076,&quot;width&quot;:1198,&quot;resizeWidth&quot;:606,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rsRa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea748d0d-b86a-40b4-991e-652cacf5aa23_1198x1076.png 424w, https://substackcdn.com/image/fetch/$s_!rsRa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea748d0d-b86a-40b4-991e-652cacf5aa23_1198x1076.png 848w, https://substackcdn.com/image/fetch/$s_!rsRa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea748d0d-b86a-40b4-991e-652cacf5aa23_1198x1076.png 1272w, https://substackcdn.com/image/fetch/$s_!rsRa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea748d0d-b86a-40b4-991e-652cacf5aa23_1198x1076.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft 
pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This isn&#8217;t specific to PostHog.</p><p>It happens in every complex platform I&#8217;ve tried using a co-pilot with.</p><p>And the more I run into this, the more obvious the gap becomes:</p><p><strong>co-pilots don&#8217;t understand the UI, the workflow, or the situation you&#8217;re actually in.</strong></p><p>They understand text.</p><p>Here&#8217;s the breakdown of where things fall apart.</p><h3><strong>Co-pilots rely on documentation and forums, not the real product</strong></h3><p>Documentation tells you the &#8220;official&#8221; way to use a product.</p><p>Power users rely on workarounds, hacks, and discovering interesting combinations of features and workflows to achieve their own unique goals.</p><p>Docs rarely cover these.</p><p>So the 
co-pilot gives answers that are correct in theory, but too restricted to work in practice.</p><p>In my PostHog example, the co-pilot repeatedly said the workflow I wanted was &#8220;not yet possible&#8221; because the documentation didn&#8217;t describe that workflow.</p><p>I later explained the problem to a PostHog engineer, and he proposed a completely different approach.</p><p>Because he understood the actual goal (not the literal workflow I had described), he could see paths I didn&#8217;t know existed.</p><p>He wasn&#8217;t confined to the questions I asked. He wasn&#8217;t limited by my understanding of the feature set. He wasn&#8217;t restricted to what&#8217;s written down. He could reason across the product, not just answer inside it.</p><p>And when I asked the co-pilot how to follow the engineer&#8217;s instructions, it had no problem explaining the steps.</p><p>That&#8217;s the fundamental gap:</p><p>Co-pilots answer the prompt; humans answer the intent.</p><p>If it&#8217;s not written down, the co-pilot assumes it doesn&#8217;t exist.</p><h3><strong>Complex platforms can do far more than what&#8217;s written</strong></h3><p>Everyone who uses tools like PostHog, Datadog, HubSpot, Notion, Zapier, or Retool knows this:</p><p>There are a hundred things the product <em>can</em> do that are never officially documented.</p><p>You learn them by clicking around, testing settings, seeing how things behave under certain conditions, chaining features together, manipulating existing flows, and watching how the UI reacts.</p><p>This is the type of knowledge humans discover naturally, but co-pilots can&#8217;t access at all.</p><p>They don&#8217;t explore the UI. They don&#8217;t test. They don&#8217;t observe behaviour. 
They don&#8217;t learn from interaction.</p><p>Which means they fundamentally misunderstand how these products are actually used.</p><h3><strong>You can&#8217;t problem-solve with a UI you don&#8217;t understand</strong></h3><p>Documentation doesn&#8217;t always get updated. Old support articles don&#8217;t get replaced. Forum posts contradict each other.</p><p>The co-pilot is stuck in a snapshot of how the product <em>used to</em> work.</p><p>The deeper issue isn&#8217;t just that documentation gets stale; it&#8217;s that a co-pilot relying on static text can&#8217;t reason about what the product <em>can</em> do, only what the docs <em>say</em> it can do.</p><p>It has no ability to explore the UI, test different orders of steps, combine features, or try alternative paths to reach the same goal. So if the documentation only describes one &#8220;official&#8221; workflow, the co-pilot assumes that&#8217;s the only workflow that exists.</p><p>Humans do this instinctively. We try things. We click around. We chain features together. We change the order. We combine steps. 
We see what happens.</p><p>If you give someone a list of ingredients and one recipe, they&#8217;ll only ever make that one dish.</p><p>A real cook sees the same ingredients and immediately knows ten things they can make.</p><h3><strong>Full automation makes the problem worse</strong></h3><p>If the agent fully automates a workflow and something is wrong (and something <em>will</em> be wrong), the user ends up needing to debug the AI&#8217;s mistake without understanding the underlying system.</p><p>This is especially painful when:</p><ul><li><p>the account has custom configurations</p></li><li><p>the user has special permissions</p></li><li><p>the data is messy</p></li><li><p>the UI behaves differently for that workspace</p></li><li><p>the workflow has branching logic</p></li></ul><p>If something breaks, the user is blind.</p><p>This is the opposite of &#8220;help.&#8221;</p><h3><strong>Users don&#8217;t want to re-explain everything every time</strong></h3><p>Another practical issue:</p><p>If you repeat the workflow a week later, you don&#8217;t want to:</p><ul><li><p>rewrite the entire prompt</p></li><li><p>specify every detail</p></li><li><p>describe your setup again</p></li><li><p>define the data sources</p></li><li><p>re-explain the exact workflow</p></li></ul><p>You want the co-pilot to:</p><ul><li><p>know your context</p></li><li><p>remember your configuration</p></li><li><p>understand your workspace</p></li><li><p>and help you perform the next step</p></li><li><p>not restart from scratch.</p></li></ul><p>This is where UI understanding matters.</p><p>The context lives in the interface, not the prompt.</p><h3><strong>Co-pilots need to assist, not take over</strong></h3><p>Most people don&#8217;t want the AI to run off and complete everything.</p><p>They want:</p><ul><li><p>suggestions</p></li><li><p>guardrails</p></li><li><p>UI highlighting</p></li><li><p>&#8220;this is where you went wrong&#8221;</p></li><li><p>&#8220;this is the next 
step&#8221;</p></li><li><p>partial automation</p></li><li><p>shared control</p></li></ul><p>Co-pilots should help with actions,  not replace them entirely.</p><h3><strong>The real solution: let the co-pilot learn by interacting with the UI</strong></h3><p>The only way around all these problems is for co-pilots to understand software the way humans do - by using it.</p><p>Something closer to OpenAI&#8217;s Operator-style agents that:</p><ul><li><p>click through the interface</p></li><li><p>test actions</p></li><li><p>inspect elements</p></li><li><p>understand actual UI states</p></li><li><p>watch how the product behaves</p></li><li><p>identify errors visually</p></li><li><p>confirm steps through interaction</p></li></ul><p>This would give the co-pilot access to <em>real</em> product knowledge, not second-hand descriptions.</p><p>The next generation of co-pilots won&#8217;t win on bigger models, they&#8217;ll win on better product understanding.</p><p>Until then, we&#8217;ll keep running into the same thing:</p><p><strong>co-pilots that sound right but don&#8217;t actually help.<br></strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/p/the-reason-co-pilots-keep-getting?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/p/the-reason-co-pilots-keep-getting?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe 
now</span></a></p>]]></content:encoded></item><item><title><![CDATA[The Journey from Sales-Led to Self-Serve]]></title><description><![CDATA[When Humans Do What the Product Should]]></description><link>https://blog.autoplay.ai/p/the-journey-from-sales-led-to-self</link><guid isPermaLink="false">https://blog.autoplay.ai/p/the-journey-from-sales-led-to-self</guid><dc:creator><![CDATA[Sam nesbitt]]></dc:creator><pubDate>Wed, 05 Nov 2025 14:37:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!b9tO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d0bc25b-43df-4b5f-a78f-39e74f27e70a_1600x925.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!b9tO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d0bc25b-43df-4b5f-a78f-39e74f27e70a_1600x925.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!b9tO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d0bc25b-43df-4b5f-a78f-39e74f27e70a_1600x925.png 424w, https://substackcdn.com/image/fetch/$s_!b9tO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d0bc25b-43df-4b5f-a78f-39e74f27e70a_1600x925.png 848w, https://substackcdn.com/image/fetch/$s_!b9tO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d0bc25b-43df-4b5f-a78f-39e74f27e70a_1600x925.png 1272w, 
https://substackcdn.com/image/fetch/$s_!b9tO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d0bc25b-43df-4b5f-a78f-39e74f27e70a_1600x925.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!b9tO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d0bc25b-43df-4b5f-a78f-39e74f27e70a_1600x925.png" width="1456" height="842" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6d0bc25b-43df-4b5f-a78f-39e74f27e70a_1600x925.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:842,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!b9tO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d0bc25b-43df-4b5f-a78f-39e74f27e70a_1600x925.png 424w, https://substackcdn.com/image/fetch/$s_!b9tO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d0bc25b-43df-4b5f-a78f-39e74f27e70a_1600x925.png 848w, https://substackcdn.com/image/fetch/$s_!b9tO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d0bc25b-43df-4b5f-a78f-39e74f27e70a_1600x925.png 1272w, 
https://substackcdn.com/image/fetch/$s_!b9tO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d0bc25b-43df-4b5f-a78f-39e74f27e70a_1600x925.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Every SaaS software company says they want their product to be more self-serve.</p><p>But inside most sales-led organisations, adoption still depends on people, account managers retaining users, success teams explaining workflows, and product managers wondering why users don&#8217;t find value without help.</p><h2><strong><br></strong>The Hidden Comfort 
of Being Sales-Led</h2><p>Why is there so much pushback against becoming completely product-led?</p><p>Because great salespeople and CSMs can hide almost any flaw in the product. Customers never truly feel friction; they just message someone and the problem disappears.</p><p>Until it doesn&#8217;t scale.</p><p>At some point, you start hiring people not to grow the business, but to explain the product. You start mistaking hand-holding for success.</p><p>When that happens, humans end up doing what the product should be doing.</p><ul><li><p><strong>CSMs </strong>teach customers where buttons are.</p></li><li><p><strong>Sales reps</strong> re-sell the same value every quarter because adoption never sticks.</p></li><li><p><strong>Product teams</strong> build from second-hand feedback and miss the real pain.</p></li></ul><p>Everyone says, <em>&#8220;We just need to improve onboarding.&#8221;</em></p><p>But onboarding isn&#8217;t the problem.</p><p>Dependence is.</p><h2><strong><br></strong>A Product That Teaches Itself</h2><p>A truly self-serve product teaches itself.</p><ul><li><p>It adapts when new users join a team.</p></li><li><p>It re-boards when features change.</p></li><li><p>It knows when someone&#8217;s stuck, and helps before they ask.</p></li></ul><p>That&#8217;s the real test. 
Not whether users can &#8220;figure it out,&#8221; but whether the product can recover from confusion.</p><p>Customer success should be about helping users grow their accounts, not teaching them to click the right buttons.</p><p>If your CSMs are doing anything other than that, your product still has training wheels.</p><h2><strong><br></strong>The Over-Correction Trap</h2><p>A user gets stuck &#8594; you add a tooltip.</p><p>Someone complains &#8594; you add another setting.</p><p>A big customer asks for something &#8594; you build a feature.</p><p>Pretty soon you&#8217;ve solved for every edge case and made the core experience worse for everyone else.</p><p>Every product has a group of users who will always be confused. The trick is knowing when that&#8217;s a product problem and when it&#8217;s a growth problem.</p><p>If 80% of users are fine and 20% aren&#8217;t, that&#8217;s not bad UX; it&#8217;s an opportunity for education or automation.</p><p>If 80% are struggling, that&#8217;s a product issue.</p><p>Most companies never make that distinction. They just throw more people at the problem.</p><h2><strong><br></strong>Why Product Signals Aren&#8217;t Enough</h2><p>Funnels and events tell you <em>what</em> happened, not <em>why</em>.</p><p>A drop-off isn&#8217;t an insight; it&#8217;s a symptom.</p><p>What matters is <strong>intent &#8211;</strong> what the user was trying to do.</p><p>Because only then can you decide who owns the problem:</p><ul><li><p><strong>Product</strong>, if the path itself is broken.</p></li><li><p><strong>Growth</strong>, if users don&#8217;t know the path exists.</p></li><li><p><strong>CS</strong>, if they know it but can&#8217;t connect it to outcomes.</p></li></ul><p>Without that context, teams optimise blind, fixing symptoms and wondering why nothing compounds.</p><h2><strong><br></strong>From Sales-Led to Self-Serve: Where to Start</h2><h4>1. 
Self-Serve Starts Inside Your Sales-Led Motion</h4><p>Sales-led growth doesn&#8217;t disappear when you go self-serve; it evolves.</p><p>The best teams aren&#8217;t removing humans; they&#8217;re automating the moments around them:</p><ul><li><p>Onboarding emails triggered by real behaviour, not fixed sequences.</p></li><li><p>Guidance personalised by intent, not generic feature tours.</p></li><li><p>Friction detection before it turns into a support ticket.</p></li></ul><h4><br>2. The Hidden Cost of Re-Onboarding</h4><p>Most companies nail the first 30 days. Calls, walkthroughs, check-ins.</p><p>But after that, things decay.</p><p>Teams change. The product evolves. New users join existing accounts.</p><p>The &#8220;how-to&#8221; knowledge lives in outdated Looms, Slack threads, and internal SOPs.</p><p>It&#8217;s not an onboarding problem; it&#8217;s a <strong>re-boarding</strong> problem.</p><p>CSMs become the product&#8217;s memory, constantly retraining users on what&#8217;s changed.</p><p>Self-serve companies solve this by detecting when someone new joins, recognising first-time patterns, and automatically guiding them to value.</p><p>Consistency shouldn&#8217;t depend on who&#8217;s still in the Slack channel.</p><p></p><h4>3. 
User Goals Change, Your Context Should Too</h4><p>Customer intent evolves.</p><p>The reason someone bought the product six months ago isn&#8217;t always why they use it today.</p><p>Without visibility into that shift, teams end up:</p><ul><li><p>Creating their own SOPs to make the product &#8220;fit.&#8221;</p></li><li><p>Misaligning Sales, CS, and Product around outdated goals.</p></li><li><p>Chasing the wrong problems because they&#8217;re anchored to old intent.</p></li></ul><p>You can&#8217;t fix that with static CRM notes or funnel data.</p><p>You need to interpret what users are <em>trying</em> to do in real time, and adjust what you show or automate accordingly.</p><p>The best teams personalise not by role or plan, but by intent.</p><h4><strong><br></strong>4. Product Signals + Sales Signals = Expansion</h4><p>Sales-led teams are wired to watch the big picture: buying signals like company growth, new hires, expansion plans, or renewed budgets.</p><p>When you optimise around revenue metrics alone (logins, seat counts, contract size), you miss the product-side intent: understanding what users are trying to do before those signals ever show up.</p><p>A user might look &#8220;healthy&#8221; on paper but still be stuck in the workflow.</p><p>Self-serve maturity comes from connecting those two worlds, pairing the external business context with what&#8217;s really happening in-product, where intent-to-buy meets intent-to-use.</p><p></p><h4>5. 
The Edge-Case Problem</h4><p>Every product has edge cases: users who get 90% of the way there, but not quite.</p><p>Product teams build for the majority.</p><p>CS and Growth teams live in the margins.</p><p>The key is scale:</p><ul><li><p>If 2% of users are stuck, it&#8217;s a growth problem.</p></li><li><p>If 20% are, it&#8217;s a product gap.</p></li></ul><p>You can&#8217;t know which without intent-level data, connecting behaviour, friction, and context in one place.</p><p>Before changing the product, isolate the cause:</p><ol><li><p>Was onboarding insufficient?</p></li><li><p>Were expectations mis-set in Sales?</p></li><li><p>Is it the wrong ICP?</p></li></ol><h4><br>6. What Self-Serve Really Means</h4><p>Self-serve isn&#8217;t about removing humans.</p><p>It&#8217;s about giving them context before they&#8217;re needed.</p><p>The best companies automate understanding, not empathy.</p><p>They:</p><ul><li><p>Detect intent, friction, and change.</p></li><li><p>Surface insights to CS and growth.</p></li><li><p>Automate guidance when it&#8217;s routine, and flag it when it&#8217;s strategic.</p></li></ul><p>When that happens, humans finally get to do what they&#8217;re best at: helping customers grow, not teaching them to click.<br></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/p/the-journey-from-sales-led-to-self?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/p/the-journey-from-sales-led-to-self?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a 
class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[The rise, fall (and rebirth) of Microsoft Clippy ]]></title><description><![CDATA[A story about timing, technology, and intent.]]></description><link>https://blog.autoplay.ai/p/the-rise-fall-and-rebirth-of-microsoft</link><guid isPermaLink="false">https://blog.autoplay.ai/p/the-rise-fall-and-rebirth-of-microsoft</guid><dc:creator><![CDATA[Sam Nesbitt]]></dc:creator><pubDate>Wed, 29 Oct 2025 14:28:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QMLO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9417e53c-0fbd-4e3d-91e0-b625f2c09556_710x473.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Software has always wanted to be helpful. In the late 1990s, Microsoft tried to make it friendlier with an animated guide in Office: a paperclip with eyes that popped up when it thought help was needed. It became famous, then infamous, and eventually&#8230; nostalgic.</p><p>It was called <strong>Clippit</strong>, or more famously, <strong>Clippy</strong>. 
The idea was decades ahead of the technology that powered it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6QIk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9b951d7-266f-4e5c-ac8c-581a277786bf_169x305.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6QIk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9b951d7-266f-4e5c-ac8c-581a277786bf_169x305.png 424w, https://substackcdn.com/image/fetch/$s_!6QIk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9b951d7-266f-4e5c-ac8c-581a277786bf_169x305.png 848w, https://substackcdn.com/image/fetch/$s_!6QIk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9b951d7-266f-4e5c-ac8c-581a277786bf_169x305.png 1272w, https://substackcdn.com/image/fetch/$s_!6QIk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9b951d7-266f-4e5c-ac8c-581a277786bf_169x305.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6QIk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9b951d7-266f-4e5c-ac8c-581a277786bf_169x305.png" width="149" height="268.905325443787" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a9b951d7-266f-4e5c-ac8c-581a277786bf_169x305.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:305,&quot;width&quot;:169,&quot;resizeWidth&quot;:149,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6QIk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9b951d7-266f-4e5c-ac8c-581a277786bf_169x305.png 424w, https://substackcdn.com/image/fetch/$s_!6QIk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9b951d7-266f-4e5c-ac8c-581a277786bf_169x305.png 848w, https://substackcdn.com/image/fetch/$s_!6QIk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9b951d7-266f-4e5c-ac8c-581a277786bf_169x305.png 1272w, https://substackcdn.com/image/fetch/$s_!6QIk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9b951d7-266f-4e5c-ac8c-581a277786bf_169x305.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The arc matters because modern products are quietly re-introducing assistants, but with better timing, better intent, and better outcomes.</p><h2><br>Why it started (history)</h2><p>In <strong>1996&#8211;1997</strong>, as Microsoft prepared to ship <strong>Office 97</strong>, it faced a problem: users were overwhelmed by menus, toolbars, and hidden features. The company wanted a &#8220;human&#8221; interface between people and software: a companion that could sense what the user was doing and offer help in real time.</p><p>Clippy was the result. Designed by Kevan J. 
Atteberry, Clippy was part of the Office Assistant, powered by early Bayesian inference models - primitive probability systems that guessed user intent based on small cues.</p><p>If a user typed &#8220;Dear &#8230;&#8221; &#8594; it likely meant &#8220;writing a letter.&#8221;</p><p>If a user opened the Tools menu repeatedly &#8594; they were probably &#8220;looking for a setting.&#8221;</p><p>Underneath, this was all logic trees and conditional probabilities - not natural language understanding. There were no embeddings, no transformers, no context memory. The assistant was running a static rule base embedded in Office, built on Microsoft&#8217;s Agent platform, a descendant of the Microsoft Bob project from 1995.</p><p>Clippy was a kind of <em>hardcoded agent</em> &#8211; an interface shell around structured rules, not reasoning. It looked alive, but it couldn&#8217;t really think.<br></p><h2><br>Why it failed</h2><p>The failure wasn&#8217;t because the idea was bad - it was because the technology couldn&#8217;t deliver what the interface promised.</p><ul><li><p><strong>No real intent modeling</strong>. Bayesian triggers worked on single-word cues, not multi-step context. If you typed &#8220;Dear&#8230;&#8221; in a poem, Clippy thought it was a letter.<br></p></li><li><p><strong>No personalization</strong>. Clippy had no user memory - every interaction was the same. It couldn&#8217;t learn from you or adapt.<br></p></li><li><p><strong>Intrusive by design</strong>. It appeared automatically, often mid-task, because it lacked the subtlety to know when help was wanted.<br></p></li><li><p><strong>No feedback loop</strong>. 
It couldn&#8217;t improve or be fine-tuned; the model lived inside your Office install.<br></p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QMLO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9417e53c-0fbd-4e3d-91e0-b625f2c09556_710x473.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QMLO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9417e53c-0fbd-4e3d-91e0-b625f2c09556_710x473.png 424w, https://substackcdn.com/image/fetch/$s_!QMLO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9417e53c-0fbd-4e3d-91e0-b625f2c09556_710x473.png 848w, https://substackcdn.com/image/fetch/$s_!QMLO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9417e53c-0fbd-4e3d-91e0-b625f2c09556_710x473.png 1272w, https://substackcdn.com/image/fetch/$s_!QMLO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9417e53c-0fbd-4e3d-91e0-b625f2c09556_710x473.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QMLO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9417e53c-0fbd-4e3d-91e0-b625f2c09556_710x473.png" width="222" height="147.89577464788732" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9417e53c-0fbd-4e3d-91e0-b625f2c09556_710x473.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:473,&quot;width&quot;:710,&quot;resizeWidth&quot;:222,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QMLO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9417e53c-0fbd-4e3d-91e0-b625f2c09556_710x473.png 424w, https://substackcdn.com/image/fetch/$s_!QMLO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9417e53c-0fbd-4e3d-91e0-b625f2c09556_710x473.png 848w, https://substackcdn.com/image/fetch/$s_!QMLO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9417e53c-0fbd-4e3d-91e0-b625f2c09556_710x473.png 1272w, https://substackcdn.com/image/fetch/$s_!QMLO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9417e53c-0fbd-4e3d-91e0-b625f2c09556_710x473.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><strong>By 2001, Microsoft disabled it by default in Office XP, and by 2007, it was gone entirely.</strong></p><p>The idea of software that knows when you need help went dormant for almost two decades.<br></p><h2><br>The rebirth &#8211; and why it matters now</h2><p>When <strong>LLMs</strong> emerged, they reintroduced something Clippy always wanted to be: 
<em>context-aware</em>.</p><p>Assistants like <strong>GitHub Copilot</strong>, <strong>Microsoft Copilot</strong>, and <strong>ChatGPT</strong> are powered by <strong>transformer-based language models</strong> &#8211; systems that don&#8217;t rely on keyword triggers, but on <em>probabilistic reasoning across massive context windows</em>. They can infer what you mean, not just what you type.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qLGq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dc2daf0-2d15-49d6-ae14-e48d83ae90c5_800x447.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qLGq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dc2daf0-2d15-49d6-ae14-e48d83ae90c5_800x447.png 424w, https://substackcdn.com/image/fetch/$s_!qLGq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dc2daf0-2d15-49d6-ae14-e48d83ae90c5_800x447.png 848w, https://substackcdn.com/image/fetch/$s_!qLGq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dc2daf0-2d15-49d6-ae14-e48d83ae90c5_800x447.png 1272w, https://substackcdn.com/image/fetch/$s_!qLGq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dc2daf0-2d15-49d6-ae14-e48d83ae90c5_800x447.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qLGq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dc2daf0-2d15-49d6-ae14-e48d83ae90c5_800x447.png" width="800" 
height="447" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9dc2daf0-2d15-49d6-ae14-e48d83ae90c5_800x447.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:447,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qLGq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dc2daf0-2d15-49d6-ae14-e48d83ae90c5_800x447.png 424w, https://substackcdn.com/image/fetch/$s_!qLGq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dc2daf0-2d15-49d6-ae14-e48d83ae90c5_800x447.png 848w, https://substackcdn.com/image/fetch/$s_!qLGq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dc2daf0-2d15-49d6-ae14-e48d83ae90c5_800x447.png 1272w, https://substackcdn.com/image/fetch/$s_!qLGq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dc2daf0-2d15-49d6-ae14-e48d83ae90c5_800x447.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In 2025, Microsoft introduced Mico, a new Copilot avatar with expressive, real-time reactions and voice-mode presence - a modern, optional embodiment of help. A hidden Clippy Easter egg nods to the past without repeating it.</p><p>Press framing is explicit: Mico aims to succeed where Clippy failed - more empathetic, less intrusive, and tuned to today&#8217;s expectations for useful, collaborative AI.</p><p></p><h2>How the assistant returns to software (done right)</h2><p>Today&#8217;s copilots are powerful, but still <strong>reactive</strong>. They respond to <strong>language</strong>, not <strong>behavior</strong>.</p><p>If a user hesitates or gets stuck without knowing why, the copilot stays silent until prompted.</p><p>The next step is moving from assistants that <strong>listen</strong> to ones that <strong>notice</strong>.</p><p>Modern assistants should work in the flow and for the outcome. 
Instead of generic &#8220;Need help?&#8221; balloons, they watch for signals: hesitation, repeat errors, dead-ends, oscillation, or divergence from a known golden path.</p><p>When confidence is high - and only then - they surface a targeted nudge, a micro-guide, or an action.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!uBJL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92fadbdb-f8e8-4b3f-ba8a-715cb58a4b7f_832x468.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!uBJL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92fadbdb-f8e8-4b3f-ba8a-715cb58a4b7f_832x468.png 424w, https://substackcdn.com/image/fetch/$s_!uBJL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92fadbdb-f8e8-4b3f-ba8a-715cb58a4b7f_832x468.png 848w, https://substackcdn.com/image/fetch/$s_!uBJL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92fadbdb-f8e8-4b3f-ba8a-715cb58a4b7f_832x468.png 1272w, https://substackcdn.com/image/fetch/$s_!uBJL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92fadbdb-f8e8-4b3f-ba8a-715cb58a4b7f_832x468.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!uBJL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92fadbdb-f8e8-4b3f-ba8a-715cb58a4b7f_832x468.png" width="468" height="263.25" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/92fadbdb-f8e8-4b3f-ba8a-715cb58a4b7f_832x468.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:468,&quot;width&quot;:832,&quot;resizeWidth&quot;:468,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!uBJL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92fadbdb-f8e8-4b3f-ba8a-715cb58a4b7f_832x468.png 424w, https://substackcdn.com/image/fetch/$s_!uBJL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92fadbdb-f8e8-4b3f-ba8a-715cb58a4b7f_832x468.png 848w, https://substackcdn.com/image/fetch/$s_!uBJL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92fadbdb-f8e8-4b3f-ba8a-715cb58a4b7f_832x468.png 1272w, https://substackcdn.com/image/fetch/$s_!uBJL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92fadbdb-f8e8-4b3f-ba8a-715cb58a4b7f_832x468.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2><br>Where this is heading, and what we&#8217;re exploring at Autoplay</h2><p>At Autoplay, we&#8217;ve been developing Pinnie the Pin Pal. </p><ul><li><p><strong>What it is</strong>. A proactive chat presence that infers intent not from text, but from behavior and context. It pops up only when users may be stuck or deviating from the expected flow.<br></p></li><li><p><strong>Where it appears</strong>. Anywhere friction concentrates: beside a disabled button, near a mis-configured form, inside a multi-step wizard, or inline on a page where users routinely stall.<br></p></li><li><p><strong>What it does</strong>. Offers the next best step, clarifies prerequisites, highlights the one control users keep missing, or links to a short, relevant walkthrough.<br></p></li></ul><p>Pinnie is powered by TERRA, Autoplay&#8217;s framework for predicting user intent by combining UI understanding with user clicks to form a unified ontology for measuring and evaluating user goals. 
</p><p>It&#8217;s the same vision Microsoft had in 1997 - helping users reach value faster - but finally with the technology to do it right.</p><p></p><h2>Why this is different from Clippy (and better)</h2><ul><li><p><strong>Inference over interruption.</strong> Pinnie uses behavioral signals (deviations, repeated back-and-forth, long hovers, rage-clicks) to predict intent and time help; Clippy relied on shallow triggers (keywords and canned heuristics).<br></p></li><li><p><strong>Context and control.</strong> Pinnie is subtle and optional, appears in place, and respects dismissals and user preferences. Clippy was on by default and hard to tune.<br></p></li><li><p><strong>Outcome-driven.</strong> Pinnie&#8217;s goal is path completion and adoption (finish a setup, export data correctly, publish the first automation), measured by golden-path completion and drop-off changes, not just &#8220;help opened.&#8221;<br></p></li><li><p><strong>Segment-smart.</strong> Behavior adapts by persona and journey stage (new vs. power user; admin vs. contributor), avoiding one-size-fits-all prompts.</p></li></ul><h2><br>Design lessons (Clippy &#8594; Pinnie)</h2><ol><li><p><strong>Appear only with confidence.</strong> Low-precision prompts erode trust fast.</p></li><li><p><strong>Help in place.</strong> Avoid modal hijacks; anchor near the friction.</p></li><li><p><strong>Make it dismissible, and remember.</strong> Respect &#8220;not now,&#8221; and learn from it.</p></li><li><p><strong>Short, specific, actionable.</strong> Offer the step that moves the user forward.</p></li><li><p><strong>Measure behavior change.</strong> Track impact on completion, retries, abandonment, not vanity clicks.</p></li></ol><h2><strong><br></strong>Closing note</h2><p>Clippy was an early, lovable misfire - a product of its moment and its limits. 1997&#8211;2007 taught the industry that help without context becomes an interruption. 
2025 brings a different pattern: assistants that listen first, then assist. That&#8217;s the lane Autoplay (Pinnie) occupies - a context-aware, intent-informed guide that appears <em>only</em> when the product itself signals friction. The face isn&#8217;t the point; the moment is.<br></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/p/the-rise-fall-and-rebirth-of-microsoft/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/p/the-rise-fall-and-rebirth-of-microsoft/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[From Reactive to Proactive: The Next Leap for Co-Pilots]]></title><description><![CDATA[We started Autoplay with a simple question:]]></description><link>https://blog.autoplay.ai/p/from-reactive-to-proactive-the-next</link><guid isPermaLink="false">https://blog.autoplay.ai/p/from-reactive-to-proactive-the-next</guid><dc:creator><![CDATA[Sam nesbitt]]></dc:creator><pubDate>Wed, 22 Oct 2025 14:51:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8aae3087-39d0-4a61-a076-d42063794ee4_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We started Autoplay with a simple question:</p><p><em><strong>What happens when software stops waiting for us?</strong></em></p><p>The history of software has always been about efficiency. Better UI, clearer workflows, faster time-to-value. Today, most tools feel good enough. 
The interfaces have converged into a shared design language: dropdowns, sidebars, dashboards. The cognitive tax of using new software is lower than ever.</p><p>But the irony is that as software became easier to use, it also became <em>easier to build</em>. Every product now looks and feels the same. AI code copilots have made shipping new features fast, but the end result is more homogeneity, not less. Everyone has the same design patterns - the same onboarding tours, the same chatbot in the corner of the screen, the same PLG playbook to &#8220;drive adoption.&#8221;</p><p>The next leap won&#8217;t come from prettier UIs or more polished onboarding. It will come from software that knows what you&#8217;re trying to do, and helps you do it <em>before you ask.</em></p><p>That&#8217;s where proactive agents come in.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VZ2d!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02c533ee-656f-4310-b994-337af8003d37_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VZ2d!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02c533ee-656f-4310-b994-337af8003d37_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!VZ2d!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02c533ee-656f-4310-b994-337af8003d37_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!VZ2d!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02c533ee-656f-4310-b994-337af8003d37_1024x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!VZ2d!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02c533ee-656f-4310-b994-337af8003d37_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VZ2d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02c533ee-656f-4310-b994-337af8003d37_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/02c533ee-656f-4310-b994-337af8003d37_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1043132,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.autoplay.ai/i/176626601?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02c533ee-656f-4310-b994-337af8003d37_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VZ2d!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02c533ee-656f-4310-b994-337af8003d37_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!VZ2d!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02c533ee-656f-4310-b994-337af8003d37_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!VZ2d!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02c533ee-656f-4310-b994-337af8003d37_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!VZ2d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02c533ee-656f-4310-b994-337af8003d37_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2><strong><br>The Problem with Today&#8217;s Co-Pilots</strong></h2><p>Most &#8220;AI copilots&#8221; today are reactive. 
They&#8217;re powerful, but fundamentally <em>dumb</em> in context. They wait for you to tell them what to do, and then they execute it. They don&#8217;t observe, reason, or infer.</p><p>Even when they connect to APIs or automate workflows, they rely on prompting, which assumes the user knows what they want, how to phrase it, and how the system works underneath. That&#8217;s a big assumption.</p><p>Prompting feels magical the first few times, but it&#8217;s still the human doing the cognitive heavy lifting. You need to know how to frame the problem, which tool can solve it, and what parameters matter. It&#8217;s like having a brilliant assistant who can do anything, but only if you give them perfect instructions.</p><p>And that&#8217;s the paradox: prompting rewards expertise. It benefits people who already understand the system. For everyone else, it&#8217;s a new kind of friction, a UX regression disguised as progress.</p><p>We&#8217;ve seen this before. The earliest software required deep expertise to operate: command lines, nested menus, rigid workflows. Then came the UI revolution, which abstracted complexity away. Prompting reverses that, and puts the work back on the human.</p><p>So if the co-pilot still needs the pilot to fly, have we really changed anything?</p><h2><strong><br>Why the Future Is Proactive</strong></h2><p>The next generation of co-pilots will be proactive, not reactive.</p><p>They&#8217;ll <em>observe</em> what you&#8217;re trying to do, <em>anticipate</em> where you&#8217;re going, and <em>suggest</em> or <em>act</em> accordingly, like a colleague who&#8217;s already one step ahead.</p><p>Instead of waiting for a prompt, they&#8217;ll watch your behavior, detect hesitation, and know when to step in. They&#8217;ll understand the difference between someone exploring a feature and someone clearly stuck. 
They&#8217;ll see that you&#8217;re creating a campaign, not just editing a field, and they&#8217;ll know the typical next three steps because they&#8217;ve seen thousands of users like you.</p><p>That&#8217;s the future of software adoption, productivity, and support.</p><p>But it requires solving several hard problems that the industry mostly ignores right now.</p><h3><strong><br>1. Context Is Still Fragmented</strong></h3><p>Every tool defines the world in its own terms: &#8220;projects,&#8221; &#8220;deals,&#8221; &#8220;campaigns,&#8221; &#8220;workflows.&#8221; None of them talk to each other meaningfully.</p><p>So when an AI connects across them, it&#8217;s moving data, not understanding it. It can describe events, but not interpret goals.</p><p>We built TERRA at Autoplay to solve exactly this. It&#8217;s a unified ontology - a common language that describes how software behaves, what users are trying to do, and what efficient workflows (&#8220;golden paths&#8221;) look like.</p><p>It lets our models reason across apps and industries - to recognize that creating a &#8220;new project&#8221; in Asana, a &#8220;campaign&#8221; in HubSpot, or a &#8220;workflow&#8221; in ActiveCampaign is the same conceptual action: initiation toward an outcome.</p><p>Without that kind of layer, AI will stay narrow and reactive: task executors rather than intelligent collaborators.</p><h3><strong><br>2. Trust Is the Bottleneck</strong></h3><p>You can&#8217;t automate what people don&#8217;t trust.</p><p>If an AI acts without explaining why, users will override it or turn it off. 
For proactive systems, transparency becomes non-negotiable.</p><p>That means clear reasoning: &#8220;I noticed most users drop off at this step, so I simplified it,&#8221; or &#8220;I pre-filled this field because it matches your past patterns.&#8221;</p><p>The UX of automation is no longer just usability; it&#8217;s legibility.</p><p>People don&#8217;t need perfection; they need confidence that what&#8217;s happening makes sense.</p><p>Trust isn&#8217;t built through accuracy alone. It&#8217;s built through explanation.</p><h3><strong><br>3. Autonomy Needs a Dial</strong></h3><p>Autonomy is a spectrum, not a switch.</p><p>Too little, and the agent is annoying, constantly asking for permission.</p><p>Too much, and it&#8217;s dangerous, making changes you didn&#8217;t consent to.</p><p>Proactive systems will need adaptive autonomy. They&#8217;ll start in observe-and-suggest mode, then gradually earn the right to act as they prove reliability.</p><p>Just like a human teammate, trust expands through consistent, predictable behavior.</p><h3><strong><br>4. Real-Time Grounding</strong></h3><p>Large language models are great with text but blind to state. They don&#8217;t actually see what&#8217;s on screen, which buttons exist, what data is visible, or whether an action succeeded.</p><p>Without real-time grounding, AI actions are educated guesses.</p><p>Proactive agents need to perceive the same reality the user does &#8211; live product state, current workflow, system feedback. That&#8217;s why grounding in UI understanding (not just API data) is so critical.</p><p>Without it, AI remains a clever autocomplete for your clicks.</p><h3><strong><br>5. Privacy and Access</strong></h3><p>To anticipate intent, agents must observe behavior. But observation at scale raises obvious privacy concerns.</p><p>Companies want automation, but they can&#8217;t afford invisible data flows. 
Users want help, but they don&#8217;t want surveillance.</p><p>The solution isn&#8217;t less context; it&#8217;s smarter context.</p><p>Local inference. Edge processing. Synthetic abstractions. Systems that learn patterns without storing personal data.</p><p>If proactive AI is going to work, it has to earn access through governance, transparency, and clear data boundaries.</p><h2><strong><br>Where It All Leads</strong></h2><p>For years, &#8220;good UX&#8221; meant minimizing clicks. The next era will be about minimizing cognition.</p><p>The most powerful software will feel like intuition, where help arrives exactly when you need it, without you asking.</p><p>That doesn&#8217;t mean removing control; it means removing unnecessary translation. Users shouldn&#8217;t have to think in software logic. The software should think in theirs.</p><p>That&#8217;s the vision behind Autoplay.</p><p>We&#8217;re building systems that can see intent, hesitation, and deviation, and connect them through a unified understanding of how software works.</p><p>Because the real bottleneck in AI isn&#8217;t model quality; it&#8217;s context.</p><p>And the real opportunity isn&#8217;t faster execution; it&#8217;s faster understanding.</p><p>We believe the future of copilots is proactive - systems that act before you ask, learn before you teach, and adapt before you notice.</p><p>When that happens, software won&#8217;t just feel easier.</p><p>It&#8217;ll feel like it finally understands you.<br></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[The making of conversational UI: prompting/clicking or both?]]></title><description><![CDATA[What&#8217;s faster: 
clicking around in an interface, or prompting an AI to act on your behalf?]]></description><link>https://blog.autoplay.ai/p/the-making-of-conversational-ui-promptingclickin</link><guid isPermaLink="false">https://blog.autoplay.ai/p/the-making-of-conversational-ui-promptingclickin</guid><dc:creator><![CDATA[Sam nesbitt]]></dc:creator><pubDate>Tue, 14 Oct 2025 09:02:21 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a6979afc-8b4c-41e3-9a1a-f0e99735fcdd_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the simplest questions about the future of software is also the most revealing:</p><h2><strong>What&#8217;s faster: clicking around in an interface, or prompting an AI to act on your behalf?</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Yqp8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54c44140-e373-46be-b0c6-194b55390f5a_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Yqp8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54c44140-e373-46be-b0c6-194b55390f5a_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Yqp8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54c44140-e373-46be-b0c6-194b55390f5a_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Yqp8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54c44140-e373-46be-b0c6-194b55390f5a_1024x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Yqp8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54c44140-e373-46be-b0c6-194b55390f5a_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Yqp8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54c44140-e373-46be-b0c6-194b55390f5a_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/54c44140-e373-46be-b0c6-194b55390f5a_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1608781,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.autoplay.ai/i/176077603?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54c44140-e373-46be-b0c6-194b55390f5a_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Yqp8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54c44140-e373-46be-b0c6-194b55390f5a_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Yqp8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54c44140-e373-46be-b0c6-194b55390f5a_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!Yqp8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54c44140-e373-46be-b0c6-194b55390f5a_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Yqp8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54c44140-e373-46be-b0c6-194b55390f5a_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>At first glance, it feels obvious: AI should win. 
Why waste time dragging blocks or formatting text when you could just <em>ask</em> for it?</p><p>But when you zoom out, speed isn&#8217;t just about how fast the software responds. It&#8217;s about how quickly you can iterate toward the result you actually want.</p><p>And in that sense, the future isn&#8217;t a battle between prompting and UI - it&#8217;s a partnership.</p><h2><strong><br>Two Kinds of Thinking</strong></h2><p>Most of what we do inside software falls into one of two modes.</p><p><strong>Type 1: Execution.</strong></p><p>These are mechanical, operational tasks: formatting a slide, adding rows to a spreadsheet, moving blocks in Notion. They&#8217;re structured, predictable, and visual. You don&#8217;t need creativity; you need control. And here, the mouse wins. Clicking through a UI is faster, less effortful, and cognitively lighter.</p><p><strong>Type 2: Organization and Strategy.</strong></p><p>This is the messy stuff: turning a brain-dump into a meeting summary, clustering ideas after a brainstorming session, or shaping raw notes into an outline. It&#8217;s abstract and unstructured. Here, prompting shines. You can ask an AI to &#8220;make sense of this chaos,&#8221; and it will - faster than you ever could manually.</p><p>This idea isn&#8217;t new; it echoes psychology&#8217;s classic distinction between <strong>System 1</strong> and <strong>System 2</strong> thinking, popularized by Daniel Kahneman. System 1 is fast, intuitive, and automatic, like clicking through familiar actions without much thought. 
System 2 is slow, deliberate, and analytical: the mode we enter when we need to organize, plan, or reason through messy problems.</p><p>Software today forces you to toggle manually between these modes, but the next generation will adapt to whichever kind of thinking you&#8217;re doing in real time.</p><p><strong>Clicking is best for precision; prompting is best for sense-making.</strong></p><h2><strong><br>The Hybrid Workflow</strong></h2><p>The real magic happens when you stop thinking of prompting <em>versus</em> UI, and start moving fluidly between them.</p><p>You might ask AI to clean up your notes (a Type 2 task), then jump back into the interface to reorder or refine details (Type 1).</p><p>Each mode plays to the other&#8217;s strengths.</p><p>This back-and-forth - prompt, click, prompt again - becomes the new loop of productivity.</p><h3><strong><br>When Prompts Become Buttons</strong></h3><p>Over time, patterns emerge. You might find yourself typing the same prompt every morning:</p><p>&#8220;Show me my calendar events today.&#8221;</p><p>At some point, the system should notice.</p><p>Typing the same thing every day is slower than clicking once. So the prompt evolves into a button - &#8220;Show Calendar&#8221; - waiting for you next time you open the app.</p><p>That&#8217;s the principle: <strong>repetition should generate UI.</strong></p><p>Prompts are where discovery begins; buttons are where efficiency stays.</p><h3><strong><br>When Conversation Becomes Interface</strong></h3><p>Strategic, novel tasks will always begin as conversation. 
Routine, predictable tasks will always crystallize into UI.</p><p>The future interface blends both: a living surface that learns from how you think.</p><p>Conversation spawns new UI elements.</p><p>UI, in turn, accelerates your next conversation.</p><p>It&#8217;s not about talking <em>to</em> software anymore; it&#8217;s about software that talks <em>with</em> you.</p><h3><strong><br>The Human&#8211;Computer Loop</strong></h3><p>What this hybrid model really optimizes is the mental loop itself:</p><p><strong>Thought &#8594; Interaction &#8594; Thought.</strong></p><p>You externalize your messy ideas through a prompt.</p><p>You refine them through the UI.</p><p>You see the outcome, learn from it, and your next thought gets sharper.</p><p>It&#8217;s a cycle of cognitive offloading: software adapting to human thought, not the other way around.</p><h2><strong><br>The Big Picture</strong></h2><p>We&#8217;re heading toward software that can <em>think ahead</em> of us.</p><p>Conversational AI will be embedded directly inside interfaces.</p><p>You&#8217;ll move between prompting and clicking depending on what&#8217;s faster in the moment.</p><p>Over time, your software will:</p><ul><li><p>Learn which actions you repeat</p></li><li><p>Turn them into shortcuts</p></li><li><p>Let you spend prompts on the high-level, creative stuff</p></li></ul><p>The interface will evolve with you, each prompt giving birth to a new button, each button freeing up mental space for bigger ideas.</p><h2><strong><br>Where Autoplay Comes In</strong></h2><p>At <strong>Autoplay</strong>, we think the next leap is <em>before</em> the prompt.</p><p>We&#8217;re building a <strong>pre-intent layer</strong> - a system that predicts what you&#8217;re trying to do before you have to ask.</p><p>It reads behavioral context (clicks, hesitations, navigation paths) and infers intent in real time:</p><p><em>&#8220;Designing a presentation.&#8221; &#8220;Analyzing churn.&#8221; &#8220;Inviting a 
teammate.&#8221;</em></p><p>From there, it can surface contextual prompts, or take the action automatically.</p><p>If you hover over a chart, it might whisper: <em>&#8220;Explain this?&#8221;</em></p><p>If you reformat slides for the third time, it might offer: <em>&#8220;Want me to make them consistent?&#8221;</em></p><p>The assistant learns your rhythm, adapts its behavior, and eventually begins prompting the software on your behalf.</p><h3><strong><br>Measuring the Shift</strong></h3><p>This isn&#8217;t just UX polish. It&#8217;s cognitive optimization.</p><p>By predicting intent, we remove friction at three levels:</p><ol><li><p><strong>Intent formulation:</strong> you don&#8217;t have to decide what to ask.</p></li><li><p><strong>Prompt translation:</strong> you don&#8217;t have to phrase it perfectly.</p></li><li><p><strong>Iteration:</strong> you don&#8217;t have to re-prompt endlessly - the system adjusts automatically.</p></li></ol><p>The result: faster task completion, fewer prompts, and lower mental load.</p><h2><strong><br>The Endgame</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EacF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07a2aca7-e3c9-420b-94d1-a21e625bda3a_2190x1298.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EacF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07a2aca7-e3c9-420b-94d1-a21e625bda3a_2190x1298.png 424w, https://substackcdn.com/image/fetch/$s_!EacF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07a2aca7-e3c9-420b-94d1-a21e625bda3a_2190x1298.png 848w, 
https://substackcdn.com/image/fetch/$s_!EacF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07a2aca7-e3c9-420b-94d1-a21e625bda3a_2190x1298.png 1272w, https://substackcdn.com/image/fetch/$s_!EacF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07a2aca7-e3c9-420b-94d1-a21e625bda3a_2190x1298.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EacF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07a2aca7-e3c9-420b-94d1-a21e625bda3a_2190x1298.png" width="1456" height="863" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/07a2aca7-e3c9-420b-94d1-a21e625bda3a_2190x1298.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:863,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:297723,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.autoplay.ai/i/176077603?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07a2aca7-e3c9-420b-94d1-a21e625bda3a_2190x1298.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EacF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07a2aca7-e3c9-420b-94d1-a21e625bda3a_2190x1298.png 424w, 
https://substackcdn.com/image/fetch/$s_!EacF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07a2aca7-e3c9-420b-94d1-a21e625bda3a_2190x1298.png 848w, https://substackcdn.com/image/fetch/$s_!EacF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07a2aca7-e3c9-420b-94d1-a21e625bda3a_2190x1298.png 1272w, https://substackcdn.com/image/fetch/$s_!EacF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07a2aca7-e3c9-420b-94d1-a21e625bda3a_2190x1298.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The future of software isn&#8217;t about choosing between conversation and control.</p><p>It&#8217;s about merging them; turning clicks into language, and language into action.</p><p>When AI can infer what you mean before you say it, prompting becomes invisible.</p><p>Software finally starts to feel like thought.</p><p>That&#8217;s where we&#8217;re headed:</p><h2><strong>A world where the UI adapts to how you think, and your thoughts flow directly into what the software does.<br><br></strong></h2><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Moving to Odoo: Here’s What Actually Changed]]></title><description><![CDATA[A few months ago, the admin work started to slow us down.]]></description><link>https://blog.autoplay.ai/p/moving-to-odoo-heres-what-actually</link><guid isPermaLink="false">https://blog.autoplay.ai/p/moving-to-odoo-heres-what-actually</guid><dc:creator><![CDATA[Greg]]></dc:creator><pubDate>Tue, 07 Oct 2025 13:02:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!FMSe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff352013e-3738-4901-b597-fadb639e5b3c_858x598.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few months ago, the admin work started to slow us down. <br><br>HR lived in too many places, time off wasn&#8217;t consistent, and signatures bounced between tools. Not a crisis, but just enough friction to steal focus. 
<br><br>So we moved the boring flows (HR + Sign) into <a href="https://www.odoo.com/">Odoo</a>. That&#8217;s when the interesting part began.</p><h2><strong><br>What Odoo Actually Gets Right</strong></h2><p>If you&#8217;re new to it: Odoo is a modular system that covers the unglamorous but essential parts of a company - HR, Inventory, Accounting, Sales, Helpdesk, e-Sign, you name it - <strong>in one ecosystem</strong>. No brittle bridges between five vendors. One data model. One permission model. One place to automate.</p><p>Small teams can start with a couple of modules (like we did). Larger orgs can go deep with custom workflows, multi-company, and audits <strong>without</strong> the typical &#8220;ERP for a year&#8221; timeline.</p><p>The pitch sounds simple. The compounding effect is not.<br><br>How it looks:<br><br><strong>With Odoo</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FMSe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff352013e-3738-4901-b597-fadb639e5b3c_858x598.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FMSe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff352013e-3738-4901-b597-fadb639e5b3c_858x598.png 424w, https://substackcdn.com/image/fetch/$s_!FMSe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff352013e-3738-4901-b597-fadb639e5b3c_858x598.png 848w, https://substackcdn.com/image/fetch/$s_!FMSe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff352013e-3738-4901-b597-fadb639e5b3c_858x598.png 1272w, 
https://substackcdn.com/image/fetch/$s_!FMSe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff352013e-3738-4901-b597-fadb639e5b3c_858x598.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FMSe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff352013e-3738-4901-b597-fadb639e5b3c_858x598.png" width="858" height="598" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f352013e-3738-4901-b597-fadb639e5b3c_858x598.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:598,&quot;width&quot;:858,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:84628,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.autoplay.ai/i/174621642?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff352013e-3738-4901-b597-fadb639e5b3c_858x598.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FMSe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff352013e-3738-4901-b597-fadb639e5b3c_858x598.png 424w, https://substackcdn.com/image/fetch/$s_!FMSe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff352013e-3738-4901-b597-fadb639e5b3c_858x598.png 848w, https://substackcdn.com/image/fetch/$s_!FMSe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff352013e-3738-4901-b597-fadb639e5b3c_858x598.png 1272w, 
https://substackcdn.com/image/fetch/$s_!FMSe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff352013e-3738-4901-b597-fadb639e5b3c_858x598.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><strong>Without Odoo</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZaZ_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f59ceb5-abda-460e-b1a6-48fdcba4097b_1157x701.png" 
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZaZ_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f59ceb5-abda-460e-b1a6-48fdcba4097b_1157x701.png 424w, https://substackcdn.com/image/fetch/$s_!ZaZ_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f59ceb5-abda-460e-b1a6-48fdcba4097b_1157x701.png 848w, https://substackcdn.com/image/fetch/$s_!ZaZ_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f59ceb5-abda-460e-b1a6-48fdcba4097b_1157x701.png 1272w, https://substackcdn.com/image/fetch/$s_!ZaZ_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f59ceb5-abda-460e-b1a6-48fdcba4097b_1157x701.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZaZ_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f59ceb5-abda-460e-b1a6-48fdcba4097b_1157x701.png" width="1157" height="701" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9f59ceb5-abda-460e-b1a6-48fdcba4097b_1157x701.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:701,&quot;width&quot;:1157,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:134609,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.autoplay.ai/i/174621642?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f59ceb5-abda-460e-b1a6-48fdcba4097b_1157x701.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZaZ_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f59ceb5-abda-460e-b1a6-48fdcba4097b_1157x701.png 424w, https://substackcdn.com/image/fetch/$s_!ZaZ_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f59ceb5-abda-460e-b1a6-48fdcba4097b_1157x701.png 848w, https://substackcdn.com/image/fetch/$s_!ZaZ_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f59ceb5-abda-460e-b1a6-48fdcba4097b_1157x701.png 1272w, https://substackcdn.com/image/fetch/$s_!ZaZ_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f59ceb5-abda-460e-b1a6-48fdcba4097b_1157x701.png 1456w" sizes="100vw"></picture></div></a></figure></div><p><br></p><h2><strong>The honest bit: what wasn&#8217;t smooth</strong></h2><p>Switching tools doesn&#8217;t erase habits. We still hit real friction:</p><ul><li><p><strong>Time-off:</strong> &#8220;Which policy applies? Who approves? Where do I see it?&#8221; Not always obvious.</p></li><li><p><strong>Sign:</strong> Templates vs. uploads created detours; a few folks still downloaded PDFs out of habit.</p></li><li><p><strong>Access:</strong> Hard to keep track of who had what across multiple tools - permission audits turned into Slack archaeology.</p></li><li><p><strong>Cost creep:</strong> Each app carried its own monthly fee and extra seats &#8220;just in case.&#8221; None painful alone, but together they added up - especially when usage was uneven.</p></li></ul><p>None of it was catastrophic. 
But friction (and subscription sprawl) kills adoption if you let it linger.</p><h2><strong><br></strong><em><strong>Adoption Ground Rules (Day 0)</strong></em></h2><p>The main suggestion: make the default path obvious. <br><br>In Odoo, half the adoption battle is reducing choices. Hide menus users don&#8217;t need (per role), set default list filters and saved views so people land where work actually happens, and make the role home (Favorites) point to just two things: &#8220;Request time off&#8221; and &#8220;Sign/Review.&#8221; <br><br>Pre-build a few Sign templates with fields anchored so no one ever drags boxes again. Use Activities to auto-create the next step (e.g., submit &#8594; manager gets &#8220;Approve Leave&#8221; task), and a simple Automation/Server Action to assign the right reviewer and auto-archive signed docs. </p><p>Fewer routes = fewer detours; the right behavior becomes the effortless one.<br></p><h2><strong>How we used Autoplay on our own Odoo project</strong></h2><p>Odoo&#8217;s power lies in its flexibility, and that also makes it&#8230; a lot. Even with a PLG mindset, you&#8217;re still dealing with new terms, deep menus, and &#8220;ten ways to do the same thing.&#8221; <br><br>We didn&#8217;t want our rollout to trade one kind of drag for another, so we ran <strong>Autoplay on our own Odoo</strong> from day one. 
The goal wasn&#8217;t to watch more replays - it was to keep a tight, repeatable loop that spots where confidence drops and fixes <em>those exact moments</em> before they spread.</p><p>Here&#8217;s what that looked like in practice:</p><h4><strong>1) Define the Golden Path (per flow, per role)</strong></h4><p>Set the &#8220;first-lap&#8221; version of success so we knew what we were measuring against.</p><ul><li><p><strong>Time-off (Employee &#8594; Manager):</strong> Request &#8594; Manager review &#8594; Approval &#8594; Calendar update</p></li><li><p><strong>Sign (Ops &#8594; Signer):</strong> Choose template &#8594; Fill fields &#8594; Send &#8594; Sign &#8594; Archive</p></li></ul><p>No heroics. Just the shortest path that builds trust and reduces &#8220;how do I&#8230;?&#8221; DMs.</p><h4><strong>2) Watch the signals (daily, ~15 minutes)</strong></h4><p>Autoplay turned usage into a quick morning read:</p><ul><li><p><strong>Hesitation:</strong> pause &#8594; hover-backtrack on steps like <em>Validate</em> / <em>Sign</em></p></li><li><p><strong>Workarounds:</strong> exporting or detouring outside the flow</p></li><li><p><strong>Deviations:</strong> skipping steps / bouncing between pages</p></li><li><p><strong>Cohorts:</strong> where <em>new joiners</em> vs <em>managers</em> struggled</p></li></ul><p>Odoo&#8217;s breadth means the same action can be reached three different ways. The point wasn&#8217;t to police exploration - it was to catch the moments where uncertainty consistently spiked.</p><h4><strong>3) Fix the moment, not the module</strong></h4><p>We resisted the &#8220;big training&#8221; instinct and targeted the exact sticking points:</p><ul><li><p><strong>Microcopy where people paused.</strong> (&#8220;What happens when I click <em>Sign</em>?&#8221; &#8594; &#8220;Sends for signature. 
You can cancel anytime before send.&#8221;)</p></li><li><p><strong>Standardized Sign templates.</strong> No guessing which fields belong where.</p></li><li><p><strong>Access clarity.</strong> Tightened permissions, added SSO, wrote down &#8220;who approves what.&#8221;</p></li><li><p><strong>Three 90-second clips</strong> for first-time use. Not a course - just the first lap.</p></li></ul><p>This wasn&#8217;t always one-and-done. Some tweaks didn&#8217;t move the needle. We rolled them back and tried the next smallest change. Odoo is flexible; your fixes should be too.</p><h4><strong>4) Re-check tomorrow</strong></h4><p>Same flows, same signals:</p><ul><li><p>Did <strong>hesitation</strong> drop at that step?</p></li><li><p>Did <strong>completion</strong> rise?</p></li><li><p>Did <strong>detours</strong> shrink?</p></li></ul><p>If yes, keep going. If not, try the next nudge. Over a few weeks, the result was fewer DMs, fewer detours, faster cycles - not because we ran a bootcamp, but because we removed the <em>specific</em> moments that made people unsure.</p><p><strong>Why did we do it this way?<br></strong>Odoo can absolutely be &#8220;set it and forget it&#8221; for simple teams, but its real value shows up when you lean into its depth. That depth also raises the adoption bar. Running Autoplay on our own rollout gave us a guardrail: if a change slowed people down, we saw it the next morning - not three weeks and ten meetings later.</p><p>Odoo&#8217;s flexibility is a feature, not a tax - <em>if</em> you pair it with a tight feedback loop. Define the path, watch the signals, fix the moment, check tomorrow. Repeat until confidence is the default.</p><h2><strong><br>Sell the Calm, Not the Tool</strong></h2><p>Don&#8217;t sell &#8220;Odoo&#8221; internally. Sell <strong>one calmer week</strong>: HR that isn&#8217;t a scavenger hunt; Sign that doesn&#8217;t stall; time-off that doesn&#8217;t need a thread.</p><p>Start with one flow. Define the path. 
Remove the scary step. Check tomorrow. Repeat.</p><p>Odoo gave us the rails. Autoplay helped us keep the train on them. The result wasn&#8217;t a big go-live moment. It was a steady reduction in friction until the boring work stopped being loud - and the meaningful work got more of our attention.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Autoplay Roadmap: How Our Thinking Evolved]]></title><description><![CDATA[When we started Autoplay, the dream was simple: real-time automation in software.]]></description><link>https://blog.autoplay.ai/p/the-autoplay-roadmap-how-our-thinking</link><guid isPermaLink="false">https://blog.autoplay.ai/p/the-autoplay-roadmap-how-our-thinking</guid><dc:creator><![CDATA[Sam nesbitt]]></dc:creator><pubDate>Thu, 02 Oct 2025 13:14:19 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4d81b33c-5568-4fb4-9546-8b23b484577c_2400x1600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When we started Autoplay, the dream was simple: real-time automation in software.</p><p>A chatbot that could take a command - <em>&#8220;Schedule this report for me&#8221;</em> - and then go do it, directly in the UI.</p><p>But we ran into a hard truth: copilots don&#8217;t fail because the AI can&#8217;t click the right buttons. They fail because people don&#8217;t know what to ask.</p><p>Users don&#8217;t know what they don&#8217;t know. They don&#8217;t know which features exist, what&#8217;s possible, or how others in their org are using the product. And even if they did, they only want answers that are relevant to them in that moment. 
That was the core hypothesis Autoplay was built on:</p><ol><li><p>Users don&#8217;t know what they don&#8217;t know</p></li><li><p>Users only want to be shown or told what is relevant to them and what they don&#8217;t already understand</p></li><li><p>Users want to understand how others in their organization/team use the same software</p></li><li><p>Users will enjoy using the product more if it&#8217;s presented through a gamified experience</p></li></ol><p>That&#8217;s why we shifted from &#8220;do-it-for-me&#8221; copilots &#8594; &#8220;show-me-why&#8221; intent detection.<br></p><h2><strong>Step One: Recording the Right Data</strong></h2><p>At first, we tried the brute force route: screenshots and computer vision.</p><p>It was slow, expensive, and it missed crucial context like hover time, mouse drift, and hesitation.</p><p>Then Monday.com gave us a better idea:</p><p><em>&#8220;Why not plug into our session replays instead? See if AI can do more there.&#8221;</em></p><p>That unlocked everything.</p><h2><strong><br>Atlas: Why Session Replays Became Our Wedge</strong></h2><p>We realized something obvious in hindsight:</p><p>Product teams hate watching session replays, but they have to, because that&#8217;s where the &#8220;why&#8221; lives.</p><p>So we built <strong>Autoplay Atlas</strong>.</p><p>It takes raw replays and turns them into high-impact insights:</p><ul><li><p>Where users hesitate</p></li><li><p>What they know vs. 
don&#8217;t know</p></li><li><p>What goals they&#8217;re trying to achieve</p></li></ul><p>Instead of hours of replay watching, you get immediate clarity.</p><h2><strong><br>Building the Models</strong></h2><p>We started by converting video into frames and training models on hesitation, knowledge, and intent.</p><p>Early versions meant uploading FullStory or Sentry links and getting back a summary.</p><p>From there, we layered conversational intelligence onto individual sessions: a chatbot you could ask,</p><p><em>&#8220;What can you tell me about what the user is trying to do?&#8221;</em></p><p>By November, our models could reliably detect intent. Genie (our UI assistant) was starting to adapt in real time. We began asking customers whether they wanted us to integrate with existing replay tools (PostHog, FullStory, Hotjar) or replace them outright.</p><h2><strong><br>Scaling the Pipeline</strong></h2><h4><strong>February</strong></h4><ul><li><p>Shipped smarter NLP - questions like <em>&#8220;Show me hesitation during onboarding filters&#8221;</em> just worked.</p></li><li><p>Knowledge and hesitation models got sharper.</p></li><li><p>Added search across sessions + hypothesis testing.<br></p></li></ul><h4><strong>April</strong></h4><ul><li><p>Redesigned the timeline for clarity.</p></li><li><p>Launched collection summaries.</p></li><li><p>Search was upgraded with unsupervised clustering - surfacing emergent goals and golden path deviations.</p></li><li><p>PostHog came on board to help us scale infrastructure and co-sell.<br></p></li></ul><h4><strong>May</strong></h4><ul><li><p>Golden Path shipped: define your ideal workflow, then instantly see where users deviate.</p></li><li><p>We refined our ICP: PLG martech tools with clear adoption paths (campaigns, integrations, automations). 
Complex enough for AI to matter, narrow enough to measure impact.</p></li><li><p>Outbound focused here &#8594; first commit came in from Sendlane.<br><br></p></li></ul><h2><strong>Thinking in First Principles: TERRA</strong></h2><p>As we mapped intent at scale, we developed <strong>TERRA</strong>:</p><p>Task, Event, Representation, Reasoning Architecture.</p><p>It&#8217;s the most efficient way we&#8217;ve found to connect raw clicks &#8594; meaningful goals.</p><p>And it powers everything from hypothesis testing to real-time copilots.</p><h2><strong><br>Hypothesis Testing: One Flow, Not Ten Tools</strong></h2><p>The current adoption problem:</p><ul><li><p>Time-to-value is too slow.<br>Teams stitch together multiple tools to ask simple questions: <em>Is this drop-off caused by a bug, a bad UX pattern, or a user decision?</em></p></li></ul><p>Our answer: turn everything into tags.</p><ul><li><p>An Issue = tag:issue_onboarding.</p></li><li><p>A Golden Path = tag:golden_path_activation.</p></li><li><p>A Hypothesis = tag:hypothesis_filters_confusion.</p></li></ul><p>Instead of bouncing between views, you stay in one unified search canvas. Every step is traceable, shareable, repeatable. The agent can run the flow end-to-end: check for bugs, compare hesitation before/after, classify outcomes as decision vs. UX.</p><p>It&#8217;s hypothesis testing, but <strong>agentic</strong>.</p><h2><strong><br>Where We&#8217;re Headed</strong></h2><p>We&#8217;re pushing into agent-driven workflows, where Autoplay doesn&#8217;t just <em>surface</em> friction but <em>tests</em> it automatically.</p><p>Future R&amp;D:</p><ul><li><p>Labeling data via chatbot interactions &#8594; supervised learning + RLHF.</p></li><li><p>Session replay&#8211;powered copilots that intervene in real time.</p></li><li><p>Tight integrations with PostHog error tracking, cohorts, and feature flags.</p></li></ul><p>The long-term vision loops back to where we started: real-time copilots. 
But this time, grounded in user intent, context, and data.<br><br></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Goals, Not Clicks]]></title><description><![CDATA[Funnels are great at telling a story that isn&#8217;t the real one.]]></description><link>https://blog.autoplay.ai/p/goals-not-clicks</link><guid isPermaLink="false">https://blog.autoplay.ai/p/goals-not-clicks</guid><dc:creator><![CDATA[Greg]]></dc:creator><pubDate>Tue, 30 Sep 2025 16:11:02 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0e10680f-f0e6-46ca-8b86-bb9edfb81d6b_1536x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Funnels are great at telling a story that isn&#8217;t the real one.</p><p>&#8220;Step 3 drop-off is 47%.&#8221;</p><p>Okay - but what were users actually trying to do? Did they intend to finish that flow, or were they just browsing? 
Did they get blocked by a bug, by the UX, or because it wasn&#8217;t the right task for them in the first place?</p><p>When the unit of analysis is a <strong>click</strong>, everything looks like a funnel problem.</p><p>When the unit of analysis is a <strong>goal</strong>, prioritization gets simple.</p><p>This write-up lays out a quiet shift that changes how growth work gets done: define real user goals, verify completion, then decide - sharpen the path or ship something new.</p><h2><strong><br>Start from intent (not events)</strong></h2><p>Before instrumenting anything, pin down <strong>who</strong> and <strong>why</strong>:</p><ul><li><p><strong>ICP / Industry / Plan:</strong> early-stage SaaS, SMB vs mid-market, free vs trial vs paid.</p></li><li><p><strong>Role:</strong> admin, operator, manager, exec.</p></li><li><p><strong>Use case:</strong> &#8220;Send first campaign,&#8221; &#8220;Invite the team,&#8221; &#8220;Export a board,&#8221; &#8220;Create an automation.&#8221;</p></li></ul><p>Then translate that into a plain-language goal:</p><p>&#8220;Create and schedule the first campaign.&#8221;</p><p>&#8220;Invite two teammates and assign roles.&#8221;</p><p>&#8220;Connect a data source and sync once.&#8221;</p><p>If intent isn&#8217;t explicit, completion rates won&#8217;t mean much.</p><h2><strong><br>Define &#8220;done&#8221; like a contract</strong></h2><p>A goal without proof is just a wish. 
Give each goal a <strong>Done Definition</strong>:</p><ul><li><p><strong>Start signal:</strong> the first clear action that commits to the goal (e.g., &#8220;Clicked <em>New Campaign</em>&#8221;).</p></li><li><p><strong>Completion signal:</strong> the irreducible proof (e.g., &#8220;Campaign scheduled&#8221; event with a valid audience).</p></li><li><p><strong>Quality bar:</strong> lightweight guardrails (e.g., audience &#8805; 1, no validation errors).</p></li><li><p><strong>Timeout window:</strong> how long counts as the same attempt.</p></li></ul><p>Now the metric isn&#8217;t &#8220;page views.&#8221; It&#8217;s <strong>Goal Completion</strong>.</p><h2><strong><br>The five numbers that matter</strong></h2><p>For each goal, track:</p><ol><li><p><strong>Seen</strong> &#8211; users who encountered the goal entry point</p></li><li><p><strong>Started</strong> &#8211; users who committed to it</p></li><li><p><strong>Completed</strong> &#8211; users who hit the proof of done</p></li><li><p><strong>Time to Goal</strong> &#8211; median time from start &#8594; done</p></li><li><p><strong>Hesitation Rate</strong> &#8211; % of attempts with pause / loop / backtrack patterns</p></li></ol><p>(Optionally add <strong>Assist Rate</strong>: cases needing help - chat, tooltip, doc - before completion.)</p><p>These five beat a dozen charts because they tell the whole arc: </p><p>intent &#8594; execution &#8594; confidence.</p><h2><strong><br>A simple decision tree for roadmap calls</strong></h2><p>Once the numbers are on the board, decisions get boring - in a good way.</p><p><strong>A. High intent, low completion</strong></p><ul><li><p>Likely cause: bug or UX/process friction.</p></li><li><p>What to check: hesitation spikes on specific steps, repeat errors, back-and-forth loops.</p></li><li><p>Action: fix the moment (microcopy, defaults, step order), not the whole module. Re-measure in 48 hours.</p></li></ul><p><strong>B. 
Low intent, high completion when started</strong></p><ul><li><p>Likely cause: positioning / discovery problem.</p></li><li><p>What to check: who <em>sees</em> the entry point, source campaign, feature findability.</p></li><li><p>Action: move, rename, or pre-qualify the entry; teach benefits earlier; target the right cohort.</p></li></ul><p><strong>C. One power user, everyone else idle</strong></p><ul><li><p>Likely cause: org-level adoption gap.</p></li><li><p>What to check: invites, role assignments, teammate comparisons.</p></li><li><p>Action: targeted enablement for <em>named</em> users on <em>named</em> steps. Don&#8217;t run a 50-person training.</p></li></ul><p><strong>D. High completion, slow Time to Goal</strong></p><ul><li><p>Likely cause: cognitive load.</p></li><li><p>What to check: steps with long dwell, unnecessary fields, decision bottlenecks.</p></li><li><p>Action: remove fields, prefill defaults, progressively disclose. Shave minutes, not pixels.</p></li></ul><h2><strong><br>Weekly &#8220;goals review&#8221;</strong></h2><p>Thirty minutes is enough if the inputs are clean.</p><p><strong>Prep (15 min)</strong></p><ul><li><p>Pick 1&#8211;3 goals that drive money (activation, upgrade, expansion).</p></li><li><p>Sort by: high <em>Seen</em>, low <em>Completed</em>, or spiking <em>Hesitation</em>.</p></li><li><p>Draft a cause hypothesis: bug / UX / knowledge / discovery.</p></li></ul><p><strong>Meeting (15 min)</strong></p><ul><li><p>For each goal: show intent, completion, time, hesitation.</p></li><li><p>Agree on the smallest change that could move the number in 48 hours.</p></li><li><p>Assign owner, deadline, and the single metric that will confirm it worked.</p></li></ul><p>Everything else goes in the parking lot. The point is momentum.</p><h2><strong><br>Instrumentation that won&#8217;t eat your week</strong></h2><p>You don&#8217;t need a NASA stack to start. 
A minimal, durable setup:</p><ul><li><p><strong>Events:</strong> goal_started, goal_completed, assist_shown, error_name, step_name.</p></li><li><p><strong>Context:</strong> role, plan, source campaign, account size.</p></li><li><p><strong>Tags / segments:</strong> ICP, industry, use case.</p></li><li><p><strong>Replay (optional):</strong> only to sample the stuck steps, not for doom-scrolling.</p></li></ul><p>Save the query as &#8220;Goal: {name} - Weekly.&#8221; Rinse, repeat.</p><h2><strong><br>What to look for in behavior (the tells)</strong></h2><ul><li><p><strong>Hesitation clusters:</strong> pause &#8594; hover &#8594; backtrack on the same control (&#8220;Validate,&#8221; &#8220;Confirm,&#8221; &#8220;Post&#8221;).</p></li><li><p><strong>Deviations from the golden path:</strong> unnecessary detours, step skipping, tab ping-pong.</p></li><li><p><strong>Workarounds:</strong> export &#8594; spreadsheet &#8594; re-import to &#8220;make it work.&#8221;</p></li><li><p><strong>Form thrash:</strong> repeated field edits, validation loops, error-copy rereads.</p></li><li><p><strong>Outlier time:</strong> steps where one cohort is 2&#8211;3&#215; slower than peers.</p></li><li><p><strong>Org imbalance:</strong> one &#8220;hero&#8221; user vs. 
idle teammates.</p></li></ul><p>These tells separate &#8220;needs a tooltip&#8221; from &#8220;needs a redesign&#8221; from &#8220;needs a fix.&#8221;</p><h2><strong><br>Two quick examples</strong></h2><p><strong>Upgrade intent, stalled:</strong> lots of pricing views + team invites, few plan changes.</p><ul><li><p>Read: high <em>why</em>, broken <em>how</em>.</p></li><li><p>Likely fix: make upgrade the natural next step in-flow; clarify limits; offer safe preview of paid features.</p></li></ul><p><strong>First automation, never activated:</strong> many starts, long time, low completion.</p><ul><li><p>Read: motivation exists, confidence doesn&#8217;t.</p></li><li><p>Likely fix: template first; prefilled sample; show a dry-run result; rename scary steps; add &#8220;undo.&#8221;</p></li></ul><p>In both cases, the question isn&#8217;t &#8220;what page is worst?&#8221; It&#8217;s &#8220;what goal is failing, for whom, and why?&#8221;</p><h2><strong><br>Where Autoplay fits (and what we learned building it)</strong></h2><p>We built Autoplay around this exact loop. Instead of highlighting clicks, it detects <strong>intent</strong>, <strong>hesitation</strong>, and <strong>deviation</strong> with UI awareness, then maps them to goals: who tried, who finished, and what got in the way. The value isn&#8217;t another dashboard; it&#8217;s faster answers to the only question that matters for growth:</p><p>Do users achieve what they came to do - and if not, is the win fixing the path or adding capability?</p><p>Use whatever stack you have to run the loop. Autoplay just compresses the time between &#8220;we think&#8221; and &#8220;we know.&#8221;</p><h2><strong><br>The quiet north star</strong></h2><p>Teams love big north-star metrics. 
Here&#8217;s a smaller one that moves them all: <strong>Time to Confidence</strong> - the time it takes from intent to &#8220;I can do this unaided.&#8221;</p><p>Shorten that, and activation rises, upgrades stick, expansions feel obvious.</p><p>Miss it, and the roadmap fills with features that look good in a deck and gather dust in the product.</p><p>Name the goal. Define &#8220;done.&#8221; Watch the tells. Fix the moment.</p><p>When the unit is a goal, growth becomes a series of easy decisions.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[AI isn't just good for automation, It's amazing for categorization]]></title><description><![CDATA[Everyone is talking about AI for automation, but the truth is that so far, AI agents aren&#8217;t proving fully reliable - but categorization is.]]></description><link>https://blog.autoplay.ai/p/ai-isnt-just-good-for-automation</link><guid isPermaLink="false">https://blog.autoplay.ai/p/ai-isnt-just-good-for-automation</guid><dc:creator><![CDATA[Sam nesbitt]]></dc:creator><pubDate>Fri, 26 Sep 2025 13:59:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/40c9a0d8-3d34-4d02-a86a-5a5184db8725_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>Low Success Rates in Real-World Tasks</strong></h2><p>A study by Carnegie Mellon University and Salesforce found that state-of-the-art AI agents succeed only about 30&#8211;35% of the time on multi-step office tasks. </p><p>For simpler, single-turn tasks, the success rate is higher (around 58%) but it drops sharply as tasks get more complex. 
<a href="https://www.theregister.com/2025/06/29/ai_agents_fail_a_lot/?utm_source=chatgpt.com">The Register</a>.</p><p>One use case that&#8217;s been surprisingly successful, <em>but often overlooked,</em> is AI for <strong>automated tagging and categorization.</strong></p><h2><strong><br>Take Mem AI, for example:</strong></h2><p><a href="https://get.mem.ai/">Mem.ai</a> has a feature called <strong>AI-Powered Tagging</strong>, which analyzes the content of your notes and assigns relevant tags automatically. </p><p>They also offer <em>Collections</em>, which neatly group notes by content and context, so you don&#8217;t waste time dragging things into folders.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YHvB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc34221e5-c904-4c92-849c-0fbe4077657a_1749x1055.avif" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YHvB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc34221e5-c904-4c92-849c-0fbe4077657a_1749x1055.avif 424w, https://substackcdn.com/image/fetch/$s_!YHvB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc34221e5-c904-4c92-849c-0fbe4077657a_1749x1055.avif 848w, https://substackcdn.com/image/fetch/$s_!YHvB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc34221e5-c904-4c92-849c-0fbe4077657a_1749x1055.avif 1272w, 
https://substackcdn.com/image/fetch/$s_!YHvB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc34221e5-c904-4c92-849c-0fbe4077657a_1749x1055.avif 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YHvB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc34221e5-c904-4c92-849c-0fbe4077657a_1749x1055.avif" width="1456" height="878" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c34221e5-c904-4c92-849c-0fbe4077657a_1749x1055.avif&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:878,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:27852,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/avif&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.autoplay.ai/i/174240381?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc34221e5-c904-4c92-849c-0fbe4077657a_1749x1055.avif&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!YHvB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc34221e5-c904-4c92-849c-0fbe4077657a_1749x1055.avif 424w, https://substackcdn.com/image/fetch/$s_!YHvB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc34221e5-c904-4c92-849c-0fbe4077657a_1749x1055.avif 848w, 
https://substackcdn.com/image/fetch/$s_!YHvB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc34221e5-c904-4c92-849c-0fbe4077657a_1749x1055.avif 1272w, https://substackcdn.com/image/fetch/$s_!YHvB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc34221e5-c904-4c92-849c-0fbe4077657a_1749x1055.avif 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>And it&#8217;s not just Mem. 
Even workflow tools like <a href="https://zapier.com/">Zapier</a> are teaming up with <a href="https://evernote.com/">Evernote</a> to bring AI into tagging, making note organization smoother than ever.</p><h2><strong><br>Superhuman took AI tagging to emails:</strong></h2><p>One of the biggest challenges with emails? There are just <em>too many emails</em>. </p><p><a href="https://superhuman.com/">Superhuman</a> solved this with auto-labels. Their system instantly tags emails as &#8220;Pitch,&#8221; &#8220;Marketing,&#8221; and more.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!d0gO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90f46702-00f0-436f-a03e-c754ac28150d_1170x686.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!d0gO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90f46702-00f0-436f-a03e-c754ac28150d_1170x686.png 424w, https://substackcdn.com/image/fetch/$s_!d0gO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90f46702-00f0-436f-a03e-c754ac28150d_1170x686.png 848w, https://substackcdn.com/image/fetch/$s_!d0gO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90f46702-00f0-436f-a03e-c754ac28150d_1170x686.png 1272w, https://substackcdn.com/image/fetch/$s_!d0gO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90f46702-00f0-436f-a03e-c754ac28150d_1170x686.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!d0gO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90f46702-00f0-436f-a03e-c754ac28150d_1170x686.png" width="1170" height="686" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/90f46702-00f0-436f-a03e-c754ac28150d_1170x686.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:686,&quot;width&quot;:1170,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:75820,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.autoplay.ai/i/174240381?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90f46702-00f0-436f-a03e-c754ac28150d_1170x686.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!d0gO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90f46702-00f0-436f-a03e-c754ac28150d_1170x686.png 424w, https://substackcdn.com/image/fetch/$s_!d0gO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90f46702-00f0-436f-a03e-c754ac28150d_1170x686.png 848w, https://substackcdn.com/image/fetch/$s_!d0gO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90f46702-00f0-436f-a03e-c754ac28150d_1170x686.png 1272w, https://substackcdn.com/image/fetch/$s_!d0gO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90f46702-00f0-436f-a03e-c754ac28150d_1170x686.png 1456w" sizes="100vw" 
loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>It&#8217;s been a game-changer. Instead of getting lost in the noise, I can zero in on what&#8217;s actually important. If I want to catch up on pitches or dive into marketing emails, I do it on my schedule - not when they land in my inbox.</p><p>With Superhuman, staying organized feels effortless, and you finally get to spend your attention where it counts.</p><h2><br><strong>AI tagging plans for Autoplay</strong></h2><p>One of the biggest challenges with session replays? There&#8217;s just <em>too much data</em>. 
</p><p>Teams often get stuck - unsure which sessions matter, what questions to ask, or how to find the insights they actually need.</p><p><strong>So we&#8217;re following in Superhuman&#8217;s footsteps:</strong></p><p>Instead of building yet another &#8220;AI that does it all for you,&#8221; Autoplay turns everything into a tag. Those tags become the foundation for exploring user behavior, forming better hypotheses, and measuring real impact.</p><p>Just like Superhuman reinvented email with smart auto-labels, we&#8217;re reimagining how teams make sense of user behavior.</p><h2><strong><br>Everything becomes a tag - and tags become your filters</strong></h2><p>In Autoplay, <strong>Intents, Issues, Golden Paths, Hypotheses, Causes, and Segments</strong> aren&#8217;t special entities. They&#8217;re just tags on sessions.</p><ul><li><p>Clicking into an Issue? It applies tag:issue_onboarding</p></li><li><p>Exploring a Golden Path? You&#8217;re seeing sessions with tag:golden_path_onboarding_success</p></li><li><p>Saving a hypothesis? That&#8217;s just another tag - tag:hypothesis_validation_error</p></li></ul><p>So instead of hopping between dashboards, everything happens in one place: the Search &amp; Filter canvas. It&#8217;s the only view you need.</p><h2><strong><br>You investigate by stacking tags</strong></h2><p>Let&#8217;s say you&#8217;re trying to understand onboarding drop-off:</p><ol><li><p>Start with sessions tagged issue:onboarding</p></li><li><p>Filter out noise: exclude sessions under 1 minute, remove internal testers</p></li><li><p>Notice a pattern: login problems. Tag those with cause:incorrect_credentials</p></li><li><p>Compare segments:</p><ul><li><p>Group A: onboarding sessions with login errors</p></li><li><p>Group B: onboarding sessions without login errors</p></li></ul></li><li><p>Save that as a hypothesis. 
Now you can track it, share it, and test it against metrics like hesitation, conversion, or time to goal.</p></li></ol><h2><strong><br>Every step is saved and traceable</strong></h2><p>Because tags are saved to sessions:</p><ul><li><p>You never lose context</p></li><li><p>You can always reload your exact filter stack</p></li><li><p>You can share investigations as saved states - not screenshots</p></li><li><p>Anyone on your team can pick up right where you left off</p></li></ul><h2><strong><br>AI That Organizes Is What&#8217;s Actually Working</strong></h2><p>There&#8217;s a lot of noise right now about AI doing the work for you. But in practice, most of it breaks down on anything more complex than a one-step task.</p><p>What has quietly proven useful is AI that helps you stay organized: tagging, sorting, and structuring. Not replacing thinking, just making it easier to do.</p><p>That&#8217;s the pattern we&#8217;ve seen again and again. The tools that help you keep track, not take over, are the ones people keep using.<br><br></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Why Post-Deployment Adoption Fails in Odoo Projects and How to Fix It]]></title><description><![CDATA[We just returned from the Odoo Experience in Brussels with one big takeaway:]]></description><link>https://blog.autoplay.ai/p/why-post-deployment-adoption-fails</link><guid isPermaLink="false">https://blog.autoplay.ai/p/why-post-deployment-adoption-fails</guid><dc:creator><![CDATA[Greg]]></dc:creator><pubDate>Wed, 24 Sep 2025 14:02:51 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/9bc69b8c-1ec6-45ad-9beb-f0d3540fff42_1536x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We just returned from the Odoo Experience in Brussels with one big takeaway:</p><p>Every partner is grappling with the same issue - <strong>post-deployment adoption.</strong></p><p>To manage it, most rely on frequent check-ins:</p><p><em><strong>&#8220;We meet weekly, sometimes twice a week, and work through issues with the client.&#8221;</strong></em></p><p>It&#8217;s a reasonable approach when the client is small and the team is just 10&#8211;20 people. You can talk to everyone directly. You can keep a mental model of who&#8217;s struggling, what flows they&#8217;re using, and where the friction is.</p><p>But once the client scales to 50, 100, or 300+ users, that model breaks.</p><p><strong>Weekly calls + surveys don&#8217;t scale.</strong></p><p>They&#8217;re time-consuming, biased, and reactive. And while several partners reminded us that adoption isn&#8217;t technically their responsibility (it&#8217;s not in the contract), they also admitted that if adoption lags, the customer won&#8217;t be happy - so they end up doing it anyway to ensure satisfaction and retention.</p><p>Partners are responsible for delivery, but they&#8217;re judged (rightly) on adoption, and there&#8217;s no reliable, proactive way to see what&#8217;s actually happening across the org between meetings.</p><h2><strong><br>The gap we kept hearing</strong></h2><p><strong>&#8220;We do lots of meetings.&#8221;<br><br></strong>Good for relationships, bad for time.</p><p>They&#8217;re <strong>not targeted at the right employees</strong>, and you end up <strong>guessing which module/section</strong> is the problem - so the real blockers stay hidden and the quiet issues still get missed.<br><br><em>Reality:</em> most partners run <strong>weekly / fortnightly / monthly</strong> check-ins and <strong>only speak with the 
manager</strong> (they can&#8217;t afford 1:1s with 50-300 employees). The manager then <strong>summarizes adoption issues</strong> - but they&#8217;re not seeing usage firsthand, so it&#8217;s <strong>biased and incomplete</strong>. Lots of struggles go <strong>unreported</strong>.</p><ul><li><p><strong>&#8220;We run internal surveys.&#8221;</strong> Helpful, but slow to interpret, and often biased by who answers.<br><br> <em>Hidden assumption:</em> this only works if managers have a <strong>reliable intake system</strong> to collect struggles from their teams. Most don&#8217;t; it&#8217;s ad-hoc Slack threads, forms no one fills, and anecdotes.<br></p></li><li><p><strong>&#8220;We&#8217;re responsible for success, but blind to the day-to-day.&#8221;</strong> Partners only hear about issues <strong>after</strong> they get loud.<br></p></li><li><p><strong>Retention is competitive.</strong> In a crowded partner market, <strong>proactive adoption</strong> is what keeps clients.</p></li></ul><h3><strong><br>Why weekly calls and surveys don&#8217;t scale</strong></h3><ul><li><p><strong>Limited coverage:</strong> A 45-minute call can&#8217;t tell you what&#8217;s happening across hundreds of users and dozens of workflows. You hear the loudest voice, not the full picture.</p></li><li><p><strong>High latency:</strong> By the time a problem makes it into a meeting, it&#8217;s already compounded. Friction becomes workaround. Confidence erodes. Recovery is slow.</p></li><li><p><strong>Manager filters &amp; bias:</strong> Most check-ins run through one person, usually a manager, who isn&#8217;t watching usage. So they summarize what they <em>think</em> is happening. That&#8217;s often off.</p></li><li><p><strong>No precision:</strong> You don&#8217;t know <em>who</em> struggled or <em>where</em>. 
So you default to &#8220;invite everyone&#8221; training, not &#8220;train these 7 users on this exact step.&#8221;</p></li><li><p><strong>Rising cost, shallow truth:</strong> Fortnightly calls across accounts burn time and rarely surface ground truth.</p></li><li><p><strong>Reactive and guessy:</strong> Even if adoption isn&#8217;t your contractual responsibility, retention depends on it, so you&#8217;re on the hook, but flying blind.</p></li></ul><p><strong>What user analytics gives you instead</strong></p><ul><li><p><strong>Breadth over anecdotes:</strong> See patterns across <em>all</em> users, not just whoever joined the call.</p></li><li><p><strong>Precision:</strong> Spot the exact module, step, and cohort that&#8217;s struggling, and how badly.</p></li><li><p><strong>Prioritization:</strong> Triage based on reach (sessions/users) and impact (roles, goals, revenue risk).</p></li><li><p><strong>Fast, targeted fixes:</strong> Decide if it&#8217;s a bug, a UX flaw, or a training issue, and act accordingly.</p></li><li><p><strong>Proactive motion:</strong> Quiet failures like hesitation, loops, or workarounds don&#8217;t rely on someone speaking up. You spot and solve them before they escalate.</p></li></ul><h2><strong><br>What &#8220;good&#8221; looks like</strong></h2><p>To get a better understanding of adoption within Odoo, you need a simple, repeatable motion:</p><h4><strong>1) Define &#8220;correct use&#8221; (Optimal Paths)</strong></h4><p>By module and role. 
Keep them versioned.</p><p>Examples:</p><ul><li><p><em>Purchase &#8594; Receipt &#8594; Putaway (move received items to storage bins) &#8594; Stock</em></p></li><li><p><em>Sales Order &#8594; Invoice &#8594; Payment</em></p></li></ul><h4><strong>2) Segment your cohorts</strong></h4><ul><li><p>Role (power users vs operators vs managers)</p></li><li><p>Team/site (Warehouse A vs B; EU vs US)</p></li><li><p>Tenure (new vs experienced).</p></li></ul><h4><strong>3) Track four core signals</strong></h4><ul><li><p><strong>Coverage</strong> - % of sessions/users touching a flow</p></li><li><p><strong>Completion</strong> - % finishing successfully</p></li><li><p><strong>Hesitation</strong> - pauses/loops/retries at a step (e.g., <em>Validate</em>, <em>Confirm</em>, <em>Post</em>)</p></li><li><p><strong>Workarounds</strong> - leaving the flow (export &#8594; spreadsheet &#8594; re-import) to &#8220;make it work&#8221;</p></li></ul><h4><strong>4) Run a weekly adoption review (30-min prep &#8594; 30-min call)</strong></h4><ul><li><p><strong>Before the call:</strong> sort by <strong>coverage &#215; hesitation &#215; completion drop</strong>; pick top 2&#8211;3 flows; assign likely cause: <strong>bug / UX-process / knowledge</strong>; propose an intervention.</p></li><li><p><strong>On the call:</strong> show <strong>what happened, who&#8217;s affected, why, and what we&#8217;ll do</strong>; confirm owners &amp; timelines.</p></li></ul><h4><strong>5) Match the fix to the cause</strong></h4><ul><li><p><strong>Bug</strong> &#8594; ticket with repro + before/after impact</p></li><li><p><strong>UX / process</strong> &#8594; simplify steps, clarify microcopy, remove dead-ends</p></li><li><p><strong>Knowledge</strong> &#8594; <strong>targeted training</strong> for <strong>named users on named steps</strong> (don&#8217;t invite 50 people)</p></li></ul><h4><strong>6) Close the loop next week</strong></h4><p>Re-check the same flow: hesitation down? completion up? coverage back on path? 
If not, revisit the cause.</p><p>If you&#8217;re starting from zero, a clean spreadsheet beats a stack of meeting notes: flow/step, module, role, completion flag, time-to-complete, hesitation proxy, workaround tag, user/session counts.</p><h2><strong><br>What to look for each week (signals that matter)</strong></h2><ul><li><p><strong>Workflow drifts:</strong> skipping, backtracking, or long detours in core flows</p></li><li><p><strong>Hesitation spikes:</strong> pause &#8594; re-try &#8594; hover-backtrack on steps like <em>Validate / Confirm / Post</em></p></li><li><p><strong>Dead-ends &amp; rage clicks:</strong> repeat clicks on disabled/blocked elements</p></li><li><p><strong>Workarounds:</strong> leaving native flows to &#8220;hack&#8221; completion elsewhere</p></li><li><p><strong>Outlier time-to-complete:</strong> steps or cohorts much slower than peers</p></li><li><p><strong>Form friction &amp; errors:</strong> repeated corrections/validation failures on the same fields</p></li><li><p><strong>Cohort concentration:</strong> specific roles/sites disproportionately affected</p></li></ul><p><strong>Triage &#8594; Intervention &#8594; Proof.</strong> That&#8217;s the loop.</p><h2><strong><br>If you want to compress the time: where Autoplay helps</strong></h2><p>Use the playbook above with any stack. 
<strong>Autoplay</strong> just makes it faster and less guessy by turning raw usage into actionable signals <strong>per module, per flow, per cohort</strong>:</p><ul><li><p><strong>Issue Dashboard (weekly starting point):</strong> automatically surfaces top friction patterns by module/flow/cohort with cluster stats (e.g., <em>Sessions: 68 &#183; Coverage: 32.5% &#183; Unique Users: 3 &#183; Hesitation Score: 29</em>).</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dUYt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9795dd83-4409-4354-bdbc-3b801357293a_1920x992.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dUYt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9795dd83-4409-4354-bdbc-3b801357293a_1920x992.png 424w, https://substackcdn.com/image/fetch/$s_!dUYt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9795dd83-4409-4354-bdbc-3b801357293a_1920x992.png 848w, https://substackcdn.com/image/fetch/$s_!dUYt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9795dd83-4409-4354-bdbc-3b801357293a_1920x992.png 1272w, https://substackcdn.com/image/fetch/$s_!dUYt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9795dd83-4409-4354-bdbc-3b801357293a_1920x992.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!dUYt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9795dd83-4409-4354-bdbc-3b801357293a_1920x992.png" width="1456" height="752" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9795dd83-4409-4354-bdbc-3b801357293a_1920x992.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:752,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:221360,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.autoplay.ai/i/174429951?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9795dd83-4409-4354-bdbc-3b801357293a_1920x992.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dUYt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9795dd83-4409-4354-bdbc-3b801357293a_1920x992.png 424w, https://substackcdn.com/image/fetch/$s_!dUYt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9795dd83-4409-4354-bdbc-3b801357293a_1920x992.png 848w, https://substackcdn.com/image/fetch/$s_!dUYt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9795dd83-4409-4354-bdbc-3b801357293a_1920x992.png 1272w, https://substackcdn.com/image/fetch/$s_!dUYt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9795dd83-4409-4354-bdbc-3b801357293a_1920x992.png 1456w" sizes="100vw" 
loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p></li><li><p><strong>Golden Paths:</strong> define once; see exactly where users deviate or hesitate.<br></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_S2s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc1fd127-4f55-47bc-becd-da1f26580319_1920x992.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!_S2s!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc1fd127-4f55-47bc-becd-da1f26580319_1920x992.png 424w, https://substackcdn.com/image/fetch/$s_!_S2s!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc1fd127-4f55-47bc-becd-da1f26580319_1920x992.png 848w, https://substackcdn.com/image/fetch/$s_!_S2s!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc1fd127-4f55-47bc-becd-da1f26580319_1920x992.png 1272w, https://substackcdn.com/image/fetch/$s_!_S2s!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc1fd127-4f55-47bc-becd-da1f26580319_1920x992.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_S2s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc1fd127-4f55-47bc-becd-da1f26580319_1920x992.png" width="1456" height="752" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fc1fd127-4f55-47bc-becd-da1f26580319_1920x992.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:752,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:142258,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.autoplay.ai/i/174429951?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc1fd127-4f55-47bc-becd-da1f26580319_1920x992.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" 
alt="" srcset="https://substackcdn.com/image/fetch/$s_!_S2s!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc1fd127-4f55-47bc-becd-da1f26580319_1920x992.png 424w, https://substackcdn.com/image/fetch/$s_!_S2s!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc1fd127-4f55-47bc-becd-da1f26580319_1920x992.png 848w, https://substackcdn.com/image/fetch/$s_!_S2s!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc1fd127-4f55-47bc-becd-da1f26580319_1920x992.png 1272w, https://substackcdn.com/image/fetch/$s_!_S2s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc1fd127-4f55-47bc-becd-da1f26580319_1920x992.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p></li><li><p><strong>Intent / knowledge / hesitation:</strong> UI-aware analysis of what users tried to do, where they got stuck, and <strong>why</strong> (bug vs UX/process vs knowledge).</p></li><li><p><strong>Clusters, not one-offs:</strong> fix patterns instead of chasing individual replays.</p></li><li><p><strong>Agentic Q&amp;A (Perplexity Flow):</strong> ask, <em>&#8220;Which module spiked this week?&#8221; &#8220;Bug or knowledge gap?&#8221; &#8220;Which power users are blocked?&#8221;</em> and get scoped answers with evidence.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!V0vE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02423b14-8c63-420a-94dc-265d49c7e30a_1920x992.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!V0vE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02423b14-8c63-420a-94dc-265d49c7e30a_1920x992.png 424w, https://substackcdn.com/image/fetch/$s_!V0vE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02423b14-8c63-420a-94dc-265d49c7e30a_1920x992.png 848w, https://substackcdn.com/image/fetch/$s_!V0vE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02423b14-8c63-420a-94dc-265d49c7e30a_1920x992.png 
1272w, https://substackcdn.com/image/fetch/$s_!V0vE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02423b14-8c63-420a-94dc-265d49c7e30a_1920x992.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!V0vE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02423b14-8c63-420a-94dc-265d49c7e30a_1920x992.png" width="1456" height="752" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/02423b14-8c63-420a-94dc-265d49c7e30a_1920x992.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:752,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:106100,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.autoplay.ai/i/174429951?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02423b14-8c63-420a-94dc-265d49c7e30a_1920x992.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!V0vE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02423b14-8c63-420a-94dc-265d49c7e30a_1920x992.png 424w, https://substackcdn.com/image/fetch/$s_!V0vE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02423b14-8c63-420a-94dc-265d49c7e30a_1920x992.png 848w, 
https://substackcdn.com/image/fetch/$s_!V0vE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02423b14-8c63-420a-94dc-265d49c7e30a_1920x992.png 1272w, https://substackcdn.com/image/fetch/$s_!V0vE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02423b14-8c63-420a-94dc-265d49c7e30a_1920x992.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p></li><li><p><strong>Real-time chat assist:</strong> when someone hovers/loops/asks the bot, Autoplay inspects the prior 
interactions, matches similar successful sessions, and <strong>guides them through Odoo&#8217;s UI in context</strong> - proactive, not generic.</p></li></ul><h2><strong><br>What a better weekly actually looks like</strong></h2><p><strong>30 minutes before the call:</strong></p><ul><li><p>Open the Issue Dashboard &#8594; review the <strong>top 3 clusters by coverage</strong>.</p></li><li><p>Note high-hesitation modules (e.g., <em>Inventory &gt; Stock Moves</em>).</p></li><li><p>Ask: <em>&#8220;What&#8217;s the likely cause for Cluster #1?&#8221;<br></em>&#8594; <em>Knowledge gap in Putaway for warehouse power users; repeated hover-backtrack on Validate.</em></p></li><li><p>Ask: <em>&#8220;Any bug signatures?&#8221;<br></em>&#8594; Compare before/after windows; flag real defects if present.</p></li><li><p>Decide the intervention:</p><ul><li><p><strong>Bug</strong> (ticket)</p></li><li><p><strong>UX</strong> (microcopy/flow tweak)</p></li><li><p><strong>Knowledge</strong> (targeted training for <strong>named cohorts</strong>).</p></li></ul></li></ul><p><strong>Walk into the call with:</strong></p><ul><li><p>The issues that matter</p></li><li><p>The users affected</p></li><li><p>The cause</p></li><li><p>The recommended fix.</p></li></ul><p><strong>One weekly, not two. 
Less guessing, more outcomes.</strong></p><h2><strong><br>Why this matters to Odoo partners</strong></h2><ul><li><p><strong>Proactive retention:</strong> show up with <strong>evidence</strong>, not anecdotes - what broke, where, for whom.</p></li><li><p><strong>Targeted training (instant ROI):</strong> identify <strong>which users</strong> are struggling and <strong>on which step</strong> inside <strong>which module</strong>; don&#8217;t pull everyone into a generic session.</p></li><li><p><strong>Fewer internal meetings:</strong> replace &#8220;let&#8217;s ask everyone&#8221; with &#8220;<strong>here&#8217;s what&#8217;s happening</strong>,&#8221; plus named cohorts to address.</p></li><li><p><strong>Less bias, more truth:</strong> behavior + intent beats manager summaries and survey guesswork.</p></li><li><p><strong>Clear accountability:</strong> tie issues to modules, flows, and specific users; ship the fix; <strong>close the loop</strong> next week.</p></li></ul><h2><strong><br>A quick scenario</strong></h2><ul><li><p><strong>Observation:</strong> Inventory shows <strong>32%</strong> of sessions deviating at <strong>Putaway</strong> (moving received items into their storage bins).</p></li><li><p><strong>Cause:</strong> No defect. Users hover &#8594; backtrack on <strong>Validate</strong>. <strong>Knowledge gap</strong>.</p></li><li><p><strong>Action:</strong> 30-minute micro-training for warehouse power users + inline copy tweak.</p></li><li><p><strong>Follow-up:</strong> Next week, hesitation drops; completion rises. Done.<br>(Compare that to two calls of &#8220;what&#8217;s not working?&#8221; plus a survey no one reads.)</p></li></ul><p><strong>In short:</strong></p><p>Weekly calls and surveys might keep the relationship warm - but they won&#8217;t show you who&#8217;s stuck, where, or why. Adoption is the real metric, and you can&#8217;t fix what you can&#8217;t see. The partners who win are the ones who move faster, with precision. 
Not more meetings - just better ones, built on evidence.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/p/why-post-deployment-adoption-fails/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/p/why-post-deployment-adoption-fails/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Availability Bias in Customer Interviews]]></title><description><![CDATA[Why your loudest customer might be misleading you]]></description><link>https://blog.autoplay.ai/p/availability-bias-in-customer-interviews</link><guid isPermaLink="false">https://blog.autoplay.ai/p/availability-bias-in-customer-interviews</guid><pubDate>Thu, 18 Sep 2025 13:03:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/718cbb0b-c47b-4f52-a796-6b374f5b7ad5_1024x1536.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In psychology, <em>availability bias</em> is when we assume something is common just because it&#8217;s memorable.</p><p>You hear about a plane crash and suddenly flying feels risky - even though it&#8217;s statistically the safest way to travel.</p><p>Nothing changed about reality - your brain just latched onto something vivid.</p><p>This shows up all the time in customer interviews.</p><p>You speak to a user who tells a compelling story about a painful experience with your product. It&#8217;s detailed. It&#8217;s emotional. It sticks in your mind. 
And suddenly, you&#8217;re prioritizing it like it&#8217;s a systemic issue.</p><p>But is it?<br></p><h2><strong>Where availability bias skews product priorities</strong></h2><p>This is where things get expensive. You might:</p><ul><li><p>Ship a major feature overhaul to fix a problem that only 4 users have</p></li><li><p>Ignore a small UI tweak that&#8217;s confusing hundreds of people a little bit every day</p></li></ul><p>You feel like you&#8217;re solving something important, but you&#8217;re actually just solving something <em>memorable</em>.</p><p>Loud &#8800; common. And <em>intensity of pain</em> isn&#8217;t the same as <em>breadth of impact</em>.</p><p>What matters is <strong>how many users it affects</strong>, and <strong>which users</strong>.<br></p><h2><strong>How to check yourself</strong></h2><p>Before jumping to solutions, ask:</p><ul><li><p><strong>Is this representative?</strong> Who am I actually speaking to?</p></li><li><p><strong>Is it confirmed elsewhere?</strong> Can I find this issue in analytics or session data?</p></li><li><p><strong>Am I leading them?</strong> Are my questions loaded with assumptions or fishing for a specific answer?</p></li></ul><p>Because if your inputs are biased, your roadmap will be too.<br></p><h3><strong>Who should you be listening to?</strong></h3><p>Customer interviews aren&#8217;t useless - they just need context. Here&#8217;s how to avoid the bias and make them useful:</p><h4><strong>1. Start by defining your cohort</strong></h4><p>Not all users are equal. A power user with high LTV or expansion potential should absolutely get more weight than a free-tier lurker. 
Before you even run an interview, know:</p><ul><li><p>What plan they&#8217;re on</p></li><li><p>How active they are</p></li><li><p>Their role and team size</p></li><li><p>Their potential to expand or churn</p></li></ul><p>You&#8217;re not just validating problems - you&#8217;re validating them <em>for the right user type</em>.<br></p><h4><strong>2. Don&#8217;t mistake a good story for good data</strong></h4><p>Ask yourself:</p><ul><li><p>Is this the first time I&#8217;ve heard this issue?</p></li><li><p>Are they describing something measurable (a drop-off, bounce, hesitation)?</p></li><li><p>Are they generalizing (&#8220;everyone on my team struggles&#8221;) or just speaking personally?</p></li></ul><p>Record your interviews, transcribe them, tag specific quotes - and always cross-check them with behavior.<br></p><h4><strong>3. Use session data to verify it</strong></h4><p>Let&#8217;s say a user complains about a confusing part of onboarding.</p><p>With Autoplay, you can:</p><ul><li><p>Pull up that user&#8217;s session</p></li><li><p>Tag the friction point</p></li><li><p>Check how many <em>other users</em> hit the same blocker</p></li><li><p>Slice it by cohort: is this just new users? Just self-serve accounts? 
Just EU customers?</p></li></ul><p>Now you&#8217;re not just taking someone&#8217;s word for it - you&#8217;re seeing how widespread the problem really is.<br></p><h2><strong>Qual + quant: how to make decisions that scale</strong></h2><p>To know if feedback is valid (and worth prioritizing), run it through both lenses:</p><h4><strong>Qualitative</strong></h4><ul><li><p>User interviews</p></li><li><p>In-app feedback forms</p></li><li><p>Support tickets and chat transcripts<br></p></li></ul><h4><strong>Quantitative</strong></h4><ul><li><p>Session replays</p></li><li><p>Drop-off analysis</p></li><li><p>Click maps and scroll depth</p></li><li><p>Funnel conversion</p></li><li><p>Tag frequency in Autoplay<br></p></li></ul><h4><strong>Example:</strong></h4><ul><li><p>3 power users complain about the export feature - all high LTV, all churn risk.</p></li><li><p>Only 5% of all users use that feature, but 80% of enterprise accounts do. That&#8217;s a fix worth prioritizing.<br></p></li></ul><h2><strong>Don&#8217;t let one user set the roadmap</strong></h2><p>Interviews are a signal, not a decision.</p><p>Use them to generate hypotheses, not conclusions. Always verify with behavior, and always ask: <em>how many users is this really affecting?</em></p><p>And <em>which ones?</em></p><p>Availability bias makes you think you&#8217;re being user-centric, when really, you&#8217;re just being reactive.<br></p><h2><strong>Final thought</strong></h2><p>Not every loud complaint is a sign of a big problem.</p><p>And not every small annoyance is insignificant.</p><p>Listen to your users. 
But don&#8217;t let one vivid story speak for all of them.<br></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Onboarding Roast: Intercom vs Crisp 🚀]]></title><description><![CDATA[One of the key things we build towards is getting users to time to value, counting down from the moment they press sign up on your software.]]></description><link>https://blog.autoplay.ai/p/onboarding-roast-intercom-vs-crisp</link><guid isPermaLink="false">https://blog.autoplay.ai/p/onboarding-roast-intercom-vs-crisp</guid><dc:creator><![CDATA[Sam nesbitt]]></dc:creator><pubDate>Tue, 16 Sep 2025 15:03:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5f6fbda7-bb13-4caf-abd6-2fd6916b49a7_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I thought it would be fun to play a weekly game with our AI insights:</p><p>I get to roast some companies&#8217; onboarding experience, test Autoplay&#8217;s AI insights and send a feedback loop to my AI engineering team - TRIPLE win.</p><p>&#128640; Onboarding Roast or Toast:<br>&#9201;&#65039; 5 minutes per tool + self-recording<br>&#127919; One clear intention.<br>&#128249; Documented Burns + Earns.<br></p><h4>Last week&#8217;s intention: </h4><p>Find a provider to build an AI-powered chatbot trained on my help center.<br></p><h2><strong><a href="https://www.linkedin.com/company/intercom/">Intercom</a></strong></h2><h3>Burns:</h3><ul><li><p>I got frustrated by the page loading time at the start.</p></li><li><p>I got confused when submitting my URL for the knowledge base.</p></li><li><p>The onboarding checklist was only accessible from the homepage. 
When I needed a reminder of the next step, tab-switching created disorientation and broke my flow.</p></li><li><p>The &#8220;Guidance&#8221; section on configuring the chatbot tone was confusing. I couldn&#8217;t tell if tone was applied globally or to certain customers. Without more context or walkthroughs, I was left second-guessing.<br></p></li></ul><h3>Earns:</h3><ul><li><p>The checklist itself was great - showed me what was possible and gave a clear picture of the journey ahead.</p></li><li><p>The UI quickly aligned with my intention: setting up an AI-powered chatbot on my knowledge base.</p><p></p></li></ul><p>Did I Get to Value? <br>No</p><h3><br>Autoplay Analysis:</h3><p><strong>(<a href="https://lnkd.in/dqTqgxkr">https://lnkd.in/dqTqgxkr</a>)</strong></p><h4>Burns:</h4><ul><li><p>It didn&#8217;t pick up that the &#8220;Guidance&#8221; section UI is what threw me off and caused me to go back to the home page to re-orient myself.</p></li></ul><h4><br>Earns:</h4><ul><li><p>It picked up that slow page loading and responsiveness caused frustration on my end: &#8220;The user exhibited impatience during loading sequences by clicking on new sections before the previous one had fully loaded&#8221;.</p></li><li><p>It picked up that I was struggling with submitting my URL: &#8220;The user might have been unsure about the precise format or correct URL needed for synchronization&#8221;.<br><br></p></li></ul><h2><strong><a href="https://www.linkedin.com/company/crisp-im/">Crisp IM</a></strong></h2><h3>Burns:</h3><ul><li><p>I couldn&#8217;t upload a simple URL to create my knowledge base. That&#8217;s all I wanted: to drop in a URL and go. 
But I couldn&#8217;t get it to work.</p></li><li><p>I was taken to the workflow section before I even had a v0 of my chatbot set up.<br></p></li></ul><h3>Earns:</h3><ul><li><p>They asked about my intentions at the very start, so the flow felt personalized from the beginning.</p></li><li><p>I loved that the onboarding checklist was available everywhere. I always knew what I could do next and in what order.</p></li></ul><p></p><p>Did I Get to Value? <br>No<br></p><h3>Autoplay Analysis: </h3><p><strong>(<a href="https://lnkd.in/dNgCUDFG">https://lnkd.in/dNgCUDFG</a>)</strong></p><h4>Burns:</h4><p>:)<br></p><h4>Earns:</h4><ul><li><p>It understood that the URL submission is what prevented me from completing my task: &#8220;When attempting to import content into their knowledge base via URL, the user was prevented from proceeding by a system error stating that a custom domain was required&#8221;.<br></p></li></ul><h2>The onboarding flow I would have loved instead:</h2><ol><li><p>Log in</p></li><li><p>Select/write my intention</p></li><li><p>Submit the URL of my help center for the chatbot&#8217;s knowledge base</p></li><li><p>Immediately interact with the chatbot trained on my knowledge base to get a feel for it and understand how it&#8217;s responding</p></li><li><p>Only then start to customize tone, response format, etc.</p></li></ol><p></p><h2>This triggered some interesting ideas linked back to Autoplay&#8217;s offering:</h2><p>Currently, Autoplay works backwards from your action sequences on software and compares them with similar users to infer your intent out of all the 1&#8230;n possible intentions in the software. 
</p><p>But we could boost its speed and accuracy by integrating with the intention forms you fill out during onboarding - so the AI instantly knows exactly what you&#8217;re trying to do.</p><p>This would give the AI more context and feed the real-time copilot instant intention data to steer the user in the right direction on the software.</p><p>Autoplay could also work faster once it locks in your goal, spotting friction points and tailoring everything to your specified intention.</p><p>Any thoughts? Feel free to drop them in the comments below :)<br></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/p/onboarding-roast-intercom-vs-crisp/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/p/onboarding-roast-intercom-vs-crisp/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.autoplay.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.autoplay.ai/subscribe?"><span>Subscribe now</span></a></p><p><br><br> </p>]]></content:encoded></item></channel></rss>