My Newest Colleague Is a Lobster — And It Monitors 197 Funds Before Breakfast

What building an AI analyst from scratch taught me about the future of investing

It was 1:47 AM on a Tuesday, and I was arguing with a lobster.

Not a real lobster, of course, but an open-source AI agent called OpenClaw, whose mascot is a space lobster named Molty. For the previous three days, I had been trying to bring it to life on a Mac Studio. By that point, my wife and children were asleep, the dog had given up on me, and I was still trying to teach the system how to parse a fund factsheet that used commas instead of decimal points.

Then it worked.

The bot retrieved the PDF, extracted the data, identified a performance deviation, and posted an alert to our Microsoft Teams channel without me touching anything. I sat back and had one of those rare moments in a career when you realize something fundamental has shifted. This was not an incremental productivity gain — it was a different operating model: an AI analyst monitoring our fund universe continuously, without fatigue and without needing to be reminded what to look for.

In fund investing, someone always has to be watching.

At our firm, that meant monitoring 197 funds across multiple currencies. Each cycle involved gathering factsheets and commentaries, checking for performance deviations, changes in assets, key-person developments, strategy shifts, and regulatory flags. It is essential work, but it is also soul-crushingly repetitive.

We use off-the-shelf tools that support parts of this process, but even with those, a meaningful amount of work remained manual: collecting documents, reconciling information across sources, surfacing exceptions, and making sure nothing slipped through. I was convinced there had to be a better way. I just did not expect the answer to come from an open-source AI project with a lobster as its mascot.

Why OpenClaw got my attention

What interested me most about OpenClaw was not the branding or the novelty. It was the architecture.

The platform is built around modular "skills": instruction sets written in plain English that define how the agent should perform specific tasks. In practical terms, that means you can teach it workflows such as scraping a website, parsing a PDF, extracting key fields, summarizing what matters, and posting the result to a collaboration tool. Rather than building every component in code, you define the process, the logic, and the expected output in natural language.
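
To make the idea concrete, here is what such a skill might look like. This is purely illustrative — it is not OpenClaw's actual skill schema, and every name in it is invented — but it captures the flavor of defining a workflow in plain English:

```markdown
# Skill: factsheet-watch (hypothetical)

When a new factsheet PDF arrives for a fund on the watch list:
1. Extract fund name, share class, NAV, monthly return, and AUM.
2. Numbers may use commas as decimal separators; normalize them before comparing.
3. Compare the monthly return against the benchmark stored in the tracking sheet.
4. If the deviation exceeds 200 basis points, post an alert to the Teams channel
   with the fund name, the figures, and a link to the source document.
5. Otherwise, log the values and stay quiet.
```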

For an investment monitoring workflow, that was a powerful idea. The challenge was no longer whether the software existed. The challenge was whether I could accurately describe the job. (You would think that part would be easy. It was not.)

Two weeks, built in plain English

One of the striking things about building with AI tools in 2026 is that traditional coding is no longer the only way to create useful systems. You still need rigor, clear thinking, structured workflows, and a willingness to test and refine. But you do not need to write the software yourself.

That is how this system came together: iteratively, skill by skill, with a combination of structured instructions, testing, and revision.

Today, the bot performs six core functions.

Document collection. It retrieves factsheets and commentaries for roughly 200 funds from manager websites, API endpoints, and email via IMAP. It checks continuously, it does not forget to look, and it never complains about a manager's website being down at 3 AM. All documents are readily available to download from a dashboard within Teams.
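
The email leg of this is the most conventional part. A minimal sketch of what collecting PDF attachments over IMAP can look like — the hostnames, credentials, and the `looks_like_factsheet` filename heuristic are my illustration, not the bot's actual code:

```python
import email
import imaplib
from email.message import Message

FACTSHEET_HINTS = ("factsheet", "fact_sheet", "commentary")

def looks_like_factsheet(filename: str) -> bool:
    """Heuristic filter: keep PDF attachments whose name suggests a factsheet or commentary."""
    name = filename.lower()
    return name.endswith(".pdf") and any(hint in name for hint in FACTSHEET_HINTS)

def pdf_attachments(msg: Message):
    """Yield (filename, raw bytes) for every matching PDF attachment in a parsed email."""
    for part in msg.walk():
        fname = part.get_filename()
        if fname and looks_like_factsheet(fname):
            yield fname, part.get_payload(decode=True)

def fetch_unseen(host: str, user: str, password: str):
    """Connect over IMAP, fetch unseen messages, and yield their PDF attachments."""
    with imaplib.IMAP4_SSL(host) as conn:
        conn.login(user, password)
        conn.select("INBOX")
        _, data = conn.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = conn.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            yield from pdf_attachments(msg)
```

The point is that nothing here is exotic: the agent's value is in running this loop relentlessly, not in the plumbing itself.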

Parsing and extraction. Each PDF is processed first through a local model running on our own hardware, which helps control cost and preserve privacy. Only higher-complexity cases are escalated to a more powerful cloud model.
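
Two details from that night's debugging illustrate the shape of this step. The comma-versus-decimal-point problem is a pure normalization issue, and the local-first routing is just a confidence gate. A sketch, with the threshold value and model interfaces as my own assumptions:

```python
def parse_number(raw: str) -> float:
    """Normalize European-style numbers ('1.234,56') as well as '1,234.56'.
    Assumes a lone comma ('2,5') is a decimal separator."""
    s = raw.strip().replace(" ", "")
    if "," in s and "." in s:
        # Whichever separator appears last is the decimal point.
        if s.rfind(",") > s.rfind("."):
            s = s.replace(".", "").replace(",", ".")
        else:
            s = s.replace(",", "")
    elif "," in s:
        s = s.replace(",", ".")
    return float(s)

def extract(pdf_text: str, local_model, cloud_model, threshold: float = 0.8):
    """Try the cheap, private local model first; escalate to the cloud model
    only when the local model reports low confidence."""
    fields, confidence = local_model(pdf_text)
    if confidence < threshold:
        fields, confidence = cloud_model(pdf_text)
    return fields, confidence
```

The escalation gate is what keeps both the API bill and the data-privacy exposure small: most documents never leave the Mac Studio.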

Exception monitoring. The system evaluates each fund daily against 21 configurable threshold rules loaded from a shared workbook in Teams. Performance deviations beyond 200 basis points, drawdowns above 15%, asset swings greater than 10%, key-person departures, strategy changes, and regulatory issues can all trigger alerts.
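
Because the thresholds live in a shared workbook, the checking logic itself can stay trivially simple. A minimal sketch of rule evaluation, using three of the thresholds mentioned above (the metric names and `Rule` structure are my illustration):

```python
from dataclasses import dataclass

@dataclass
class Rule:
    metric: str       # e.g. "perf_deviation_bps", "drawdown_pct", "aum_change_pct"
    limit: float
    message: str

# Illustrative subset of the workbook-driven rules.
RULES = [
    Rule("perf_deviation_bps", 200, "Performance deviation beyond 200 bps"),
    Rule("drawdown_pct", 15, "Drawdown above 15%"),
    Rule("aum_change_pct", 10, "Asset swing greater than 10%"),
]

def evaluate(fund: dict, rules=RULES):
    """Return the alert messages triggered by a fund's daily metrics."""
    return [
        rule.message
        for rule in rules
        if abs(fund.get(rule.metric, 0.0)) > rule.limit
    ]
```

Deterministic rules like these are exactly where AI is not needed; the model only gets involved in producing the numbers the rules consume.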

News surveillance. It runs daily sweeps across liquid fund managers and weekly deep dives on 32 private market firms, filtering noise and surfacing only material developments.
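
Filtering noise starts with a crude first pass before anything reaches a model or a human. A sketch of that pre-filter — the term list is invented for illustration, not the bot's actual watchlist:

```python
import re

# Illustrative terms; anything matching gets a second look from a model or a human.
MATERIAL_TERMS = (
    "departure", "resigns", "investigation", "fine",
    "suspension", "gating", "side pocket", "restructuring",
)

def is_material(headline: str) -> bool:
    """Whole-word match against the materiality term list (case-insensitive)."""
    h = headline.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", h) for term in MATERIAL_TERMS)
```

A filter this blunt over-triggers by design; its job is only to shrink the haystack before the more expensive assessment runs.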

Commentary assessment. It not only collects and summarizes the latest manager commentaries but also assesses the tone (confident? defensive?) and how it changes over time.
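
In production this assessment is model-driven, but the idea of tracking tone over time can be shown with a naive lexicon stand-in (the word lists are invented for illustration):

```python
CONFIDENT = ("conviction", "opportunity", "well positioned", "constructive")
DEFENSIVE = ("challenging", "headwinds", "cautious", "disappointing")

def tone_score(text: str) -> float:
    """Score in [-1, 1]: positive = confident, negative = defensive."""
    t = text.lower()
    pos = sum(t.count(w) for w in CONFIDENT)
    neg = sum(t.count(w) for w in DEFENSIVE)
    return (pos - neg) / max(pos + neg, 1)

def tone_shift(history: list) -> float:
    """Change in tone between the two most recent commentaries.
    A large negative shift is itself a signal, even if each letter reads fine in isolation."""
    return tone_score(history[-1]) - tone_score(history[-2])
```

What matters is less the absolute score than the trajectory: a manager drifting from "conviction" to "headwinds" over three letters is worth a human read.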

Reporting. By 6:30 AM CET, a seven-tab Excel dashboard is published to Teams (SharePoint) so the team begins the day with a complete briefing.

Development took two weeks of evenings and weekends. Ongoing cost is minimal: a small amount of API spend and the electricity required to run a Mac Studio.

Under the hood, the system is less magical than it may sound. The scheduling, downloading, threshold checks, and alerting are rules-based. The AI is used where the work is genuinely messy: interpreting unstructured PDFs, extracting information from inconsistent formats, and summarizing what a human reviewer needs to know. Reliable automation comes from combining deterministic controls with AI where judgment and flexibility are actually needed.

The most important consequence of this design is not just efficiency. It is accessibility. Because the platform works through natural language, team members can describe new requirements without writing code or editing a spreadsheet. An analyst who wants a new fund added, a different alert threshold, or a revised summary format simply tells the bot what they need. The bot drafts the change, we review and deploy it, and suddenly the entire team is collectively shaping its own monitoring system. No IT ticket. No developer. No waiting. Plain English is the only programming language required.

A necessary caveat: this remains experimental

That said, none of this should be mistaken for a finished, low-risk product.

Our system is a prototype. It is useful and increasingly capable, but it remains an experiment. And more broadly, the agentic AI ecosystem has already shown why caution is essential.

During setup, the bot repeatedly requested broad permissions: full disk access, terminal control, browser automation, and email access. It was like hiring a brilliant new analyst who, on their first day, asks for the master key to every office, the company credit card, and your email password. In practice, I spent almost as much time restricting permissions and hardening the environment as I did building functionality. OpenClaw has no access to sensitive data, no passwords, not even to my regular Wi-Fi.

The wider ecosystem has produced enough warning signs to take this seriously. Cisco's security team found third-party skills that silently exfiltrated user data to external servers. Researchers identified that roughly one in five skills on OpenClaw's public marketplace contained malicious code, disguised behind professional documentation while installing keyloggers and credential stealers. A critical vulnerability (CVE-2026-25253) enabled one-click remote code execution: a single visit to a malicious website could give an attacker full control of the agent. Microsoft's Defender team recommended that most organizations simply not deploy OpenClaw, and if they do, to treat the system as one that could be compromised at any time. Platform maintainers themselves have warned that tools like this can be dangerous in the hands of users who do not fully understand the operational and security implications.

If you do not understand the permissions you are granting, the systems you are exposing, and the risks you are creating, you should not experiment casually with agentic AI.

What I learned at 1:47 AM

Building this system changed the way I think about AI in investment management.

First, the capability is real, but the governance is not yet mature. The technology can already perform meaningful operational work at a level that would have seemed improbable not long ago. But the controls, standards, and security practices around it are still catching up. According to a 2025 global survey, 73% of asset management executives say AI is critical to their organization's future, yet fewer than 10% are currently using agentic AI. McKinsey's research found that while nearly 90% of companies have invested in AI, fewer than 40% report measurable gains, largely because most are applying it to discrete tasks rather than redesigning how work gets done.

Second, AI amplifies human judgment more than it replaces it. Our bot processes more information before breakfast than a team could review in a week. But it still does not decide what belongs in a portfolio, how a risk should be weighed, or how a client conversation should be handled. Those remain human responsibilities.

Third, democratization is both the promise and the risk. I built this in two weeks without traditional software development. Five years ago, that would have required a team of engineers and a six-figure budget. That is remarkable. It also means people with very little security experience can now assemble systems with real access to files, email, browsers, and workflows. The barrier to entry has fallen faster than the barrier to harm.

Fourth, trust in these systems must be earned operationally. Least privilege, controlled environments, review processes, logging, and ongoing monitoring are not optional extras. They are the price of admission.

The next morning

When I was finally ready to close my laptop that Tuesday night, the bot noticed the time and wished me good night. It told me it was finishing the last downloads and would have a summary ready by the morning.

By the time I woke up, it had already flagged two funds for review and posted a clean dashboard to SharePoint.

My newest colleague does not need coffee. It does not take holidays. It does not overlook something because it is tired or distracted or thinking about lunch. It works quietly, consistently, and for the price of a modest sandwich.

The future of investment management is not about replacing the people responsible for client outcomes. It is about equipping them with better tools: broader coverage, less manual effort, earlier risk detection, and more time for the judgment calls that actually matter.

The future is arriving fast, and it is powerful. It may also be a little dangerous. The right response is not fear or hype, but thoughtful adoption, strong controls, and a healthy respect for what these systems can do.

And occasionally, those systems come with claws.