8 Lessons Learned: Promising Signs for Entrepreneurs
Dec 28, 2024
6 min read
After six weeks of piloting our chat-based financial tracker and learning assistant across Myanmar and Nigeria, we've learned that getting entrepreneurs to choose a tool, without offering financial incentives through grants or subsidized loans, requires more than flashy technology. It demands a kind of alchemy: blending familiar, flexible conversation flows with structured, guided interactions, while providing immediate and obvious value. We're serious about making a real impact on small business owners around the world, one that's scalable and visible in macro-level numbers such as access to finance through lower risk premiums. So we're sharing eight lessons from our early pilot tests with business owners that might help you on your own journey to make an impact on entrepreneurs everywhere.
1. People Need Intrinsic Value - Not Handouts
When you remove direct or indirect financial incentives, you get a clear signal of whether your solution genuinely helps. We welcomed a smaller yet authentic user group that gave a pure but harsh signal on the value produced. In short, we need to make significant improvements: remove onboarding hurdles, demonstrate value earlier, and build better features that retain users and bring them back.
Key Takeaway: It may look ugly in early numbers, but unvarnished feedback from truly unbiased users helps us steer toward features that actually matter to entrepreneurs. This is similar to "skin in the game" and makes us all accountable for genuinely good or bad outcomes.
2. Mixing Newly-Possible LLM Flexibility with Old-Fashioned Structure
Our biggest technical revelation was the need to blend open-ended, LLM-based interactions with a more defined, traditional flow. Too much rigidity stifles natural conversation; too much open-endedness confuses users. In established tools like QuickBooks and Xero, data input starts with form-based, highly rigid structures; getting the data into the proper format has always been a hard requirement.
But now, with node-and-edge architectures like LangGraph, we can define a structured path but still support user-driven detours that mimic familiar conversation formats. This opens an incredible opportunity to rethink user interactions from the very beginning - and it's only just now possible.
Flexible Chat Flow
We saw how real entrepreneurs jump around, such as wanting to quickly undo an entry or add a shipping fee. A purely linear script can’t handle that.
Defined Steps & Quick Replies
At the same time, having suggested buttons or a short list of “next options” keeps users from being overwhelmed. This structure reduces cognitive load.
Key Takeaway: By narrowly defining each agent's role while letting LLMs handle open-ended user inputs, we preserve user freedom and still guide them through crucial steps. Constraining and clarifying each step also improves agent performance during development: with a smaller "area" to cover, the agent has fewer "corners" to get stuck in.
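The blend of defined steps and user-driven detours described above can be sketched in plain Python. This is a toy illustration, not our production code: the keyword router stands in for an LLM intent classifier, and all node names are hypothetical. A real implementation would build the same shape with LangGraph's node-and-edge primitives.

```python
# Toy sketch of a node-and-edge conversation flow (illustrative only).
# classify_intent() stands in for an LLM router mapping free-form text
# to a node; in production this would be a model call, not keywords.

def classify_intent(message: str) -> str:
    text = message.lower()
    if "undo" in text:
        return "undo"
    if "fee" in text or "expense" in text:
        return "expenses"
    return "revenue"

# Nodes: narrowly-scoped handlers. Edges: which node a message routes to.
def handle_revenue(state, msg):
    state["entries"].append(("revenue", msg))
    return state

def handle_expenses(state, msg):
    state["entries"].append(("expense", msg))
    return state

def handle_undo(state, msg):
    if state["entries"]:
        state["entries"].pop()  # user-driven detour: revert the last entry
    return state

NODES = {"revenue": handle_revenue, "expenses": handle_expenses, "undo": handle_undo}

def step(state, message):
    """Route each message through the graph instead of a fixed script,
    so users can jump around without breaking the main path."""
    return NODES[classify_intent(message)](state, message)

state = {"entries": []}
for msg in ["Sold 3 bags for 12,000", "Add a 500 shipping fee", "Undo that"]:
    state = step(state, msg)
# After the undo detour, only the revenue entry remains.
```

The structured path (revenue, expenses) still exists, but the router lets users detour at any point, which a purely linear script cannot do.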
3. Technical Lessons from Our Early “Basic Agent” Architecture
During this pilot, we used a relatively simple approach with a single AI agent and no sophisticated node-and-edge flows, so we made concessions, funneling users down one rigid conversation path to improve performance. We quickly found ourselves losing the main benefit of chat-based conversations: true flexibility. Our next versions use LangGraph to better balance rigidity and flexibility in a more user-friendly way while maintaining performance.
Rigid = High Friction
Our naive, path-based logic forced everyone through the same sequence (revenue, then expenses, then assets, etc.). Users struggled, with feedback like they "weren't finished with revenue and felt forced into expenses before they were ready." In reality, the AI agent could have handled users continuing to enter revenue, but the flow made this seem impossible. Either way, this is not how real conversations flow, and it lost much of the flexibility we were seeking.
Real-World User Behavior
The pilot’s data confirmed the value of the node-and-edge approach we’re implementing next. It will let us maintain the conversational flexibility without swinging too far into the old-fashioned methods.
Key Takeaway: A well-designed multi-agent structure will not only perform better but also fit a more flexible user journey, one better aligned with how entrepreneurs think and act in daily life. Our initial version replaced existing rigid processes with chat-based versions of the same thing, which didn't move the needle for our testers. Overly simplistic agent architectures can't support this approach; we need more robust frameworks like LangGraph to achieve these ambitious goals.
4. First Impressions Matter: Show Value Fast
No matter how great your tech is, if you can’t show entrepreneurs how it helps them in the first 30 seconds, they’re gone. That means:
Immediate Value Demo
In our pilots, we prioritized longer onboarding to gather the user data needed for personalization features. Now, rather than asking ten setup questions, we give users a personalized tip or a snapshot of their finances right away. This encourages them to enter more data without the immediate form-filling that leads to high dropout rates.
Less Cognitive Load
We learned that incremental personalization works better than a massive upfront data capture.
Key Takeaway: Provide a quick “win” before you ask for user data. Let them see how this will help in their day-to-day business to create buy-in.
5. Consistency & Hooks: Reminders Mean Retention
Busy entrepreneurs won’t log finances daily unless you remind them. Just like Duolingo, well-timed nudges keep them engaged. A “What were yesterday’s sales?” prompt will likely make a huge difference between consistent usage and total abandonment.
Key Takeaway: Building “pull” mechanisms may matter more than your core feature set. Even the best tool gathers dust without ongoing reminders.
6. Multi-Entry & Real-Time Edits: AI’s Strong Suit
Users often rattled off several revenue items in one message, then decided to subtract a shipping fee moments later. Our LLM-based approach lets them do this in a free-form manner (e.g., "Oops, discount that last item by 3,000"), and the back-end handles the structuring so users don't have to worry about the tedium of that task. Additionally, a proper agent-based architecture supports tools that can handle the math-based steps that are notoriously tricky for LLMs on their own.
Key Takeaway: This is the Lindy Effect in action: chat-based conversation is as old as language itself. By pairing that time-tested mode of communication with modern AI and embedding it in a financial workflow, we're transforming financial tracking from abrasive, rigid forms into a familiar, intuitive exchange.
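This division of labor can be sketched as follows, assuming a hypothetical operation format the LLM might emit: the model turns free-form chat into structured ops, and plain code (a "tool") applies the arithmetic deterministically instead of asking the model to do math.

```python
# Hypothetical sketch: the LLM parses free-form chat into structured ops;
# a plain-code tool applies them, keeping all arithmetic out of the model.

def apply_ops(ledger: list[dict], ops: list[dict]) -> list[dict]:
    """Apply structured edit operations to a ledger of line items."""
    for op in ops:
        if op["type"] == "add":
            ledger.append({"item": op["item"], "amount": op["amount"]})
        elif op["type"] == "discount_last":
            ledger[-1]["amount"] -= op["amount"]  # deterministic math, no LLM
    return ledger

# Ops as an LLM might emit them for the message:
# "Sold rice 10,000 and beans 4,000. Oops, discount that last item by 3,000."
ops = [
    {"type": "add", "item": "rice", "amount": 10_000},
    {"type": "add", "item": "beans", "amount": 4_000},
    {"type": "discount_last", "amount": 3_000},
]
ledger = apply_ops([], ops)
# beans ends at 1,000 and the ledger total at 11,000
```

Because the correction is an operation rather than a re-entered form, the user can fix a mistake mid-conversation without starting over.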
7. Genuine Interest in AI-Driven Learning Assistant
Although we barely promoted our built-in "AI learning assistant," most users found it on their own and engaged with it multiple times.
Key Takeaway: There's clear demand for dynamic, AI-driven business guidance. Integrating a robust learning assistant and AI-driven business mentor could be a major differentiator.
8. Minority Language Handling Is Likely “Good Enough” for Constrained Tasks
Many of our test users interacted through non-Latin scripts and non-English languages. While imperfect, we found multiple return users, indicating that our processes are likely sufficient for a constrained application like financial tracking.
Key Takeaway: Combining prompt engineering, LLMs, and machine translation can create valuable, personalized applications in hundreds of languages. It's still more costly than English, but these models are improving every month, and costs are falling in a way that can bring high-quality support to minority language groups at a depth and breadth that was never possible before.
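One common shape for this combination is a translation "sandwich": translate the user's message into English, run the English-tuned prompt and LLM, then translate the reply back. The sketch below uses stub functions in place of a real MT service and LLM; every name here is hypothetical, not our production interface.

```python
# Hedged sketch of an MT-sandwich pipeline. translate() and run_llm() are
# stubs: a real system would call a machine-translation API and an LLM.

def translate(text: str, source: str, target: str) -> str:
    # Stub standing in for a machine-translation service call.
    return f"[{source}->{target}] {text}"

def run_llm(prompt: str) -> str:
    # Stub standing in for an LLM call with a task-specific English prompt.
    return f"Parsed entry from: {prompt}"

def handle_message(text: str, user_lang: str) -> str:
    english = translate(text, user_lang, "en")
    reply = run_llm(english)  # a constrained task (financial entry) keeps errors low
    return translate(reply, "en", user_lang)

# A Burmese-language message flows through the full round trip.
reply = handle_message("ငွေစာရင်း", "my")
```

The constrained domain matters: financial entries have a narrow vocabulary, so imperfect translation is often still "good enough," as the pilot suggested.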
Closing Thoughts: Embracing the Lindy Effect
Nassim Taleb describes how the Lindy Effect suggests that things which have stood the test of time are likely to endure, especially technologies that emulate longstanding practices. People have always used simple, flexible conversation to talk about money, solve problems, and correct mistakes. We're harnessing this familiar "chat" model for a task that has historically been bogged down by complex financial software.
User-Centric Iteration
Every improvement we make is grounded in actual pilot feedback—no forced usage, no hidden incentives, just raw data on what’s working and what’s not.
Sustainable Adoption
If the tool truly mimics the natural way people talk about finances, we can outlast any fad and deliver consistent value.
Key Takeaway: The best way forward is continued integration of proven, familiar conversational patterns with the cutting-edge power of specialized LLM agents that allow us to apply this pattern to areas that were previously too difficult - like financial tracking and personalized learning in multiple languages.
Looking Ahead
These lessons have us eager to deploy a more advanced, node-and-edge-based agent design. We'll preserve the flexibility of open-ended chat while maintaining enough structure to guide entrepreneurs effectively. We're also revamping our onboarding and data-editing features to fit the real-world constraints of entrepreneurs who won't give a new tool a chance unless they immediately feel its value.
If you're an ESO or potential partner looking to empower MSMEs through a product that thrives on genuine, user-led adoption (not grant incentives), we'd love to connect. Together, we can leverage the Lindy Effect, time-tested conversation infused with AI-powered personalization, to create a transformative experience for small businesses worldwide.