

“Once we attached Claude to Bauplan, the first question our operations team asked was: which client is most likely to churn? Claude queried Bauplan and predicted a client that had churned literally one day before. That was the moment everyone got it.”
Moffin is a FinTech platform that provides credit bureau data, identity verification, and risk workflows to financial institutions across Latin America: when lenders need to make a financial decision, they reach out to Moffin to get the relevant data.
Moffin’s data engineering team supports every function in the company: leadership needs data to plan company growth, the operations team wants to track customer health, marketing has to measure the ROI of campaigns, and the engineering org constantly monitors service reliability.
Before Bauplan, every data question followed the same slow path: someone had an idea, asked the data team, waited for a query, got a one-off answer, and then decided if it was worth turning into a dashboard. The data team was swamped with ad-hoc, non-foundational work, and most ideas never made it past step two.
In other words, analytics intelligence (how to query and report on polished data) and data engineering intelligence (how to prepare polished data from the raw signals and systems powering the business) were coupled inside the data team. Bauplan and Claude let them decouple the two: analytics requests are handled through AI, freeing the data engineering team to focus on high-value, repeatable work.
Carlos and the data engineering team leveraged Bauplan to easily build a lakehouse architecture. When raw files—customer credit bureau data, internal application metrics, Stripe billing, Monday.com project tracking, and service health logs—land in S3 through DLT connectors, a few lines of Bauplan code perform Quality-Gated Updates to create verified Apache Iceberg source tables.
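A quality-gated update follows the write-audit-publish pattern: new data lands on an isolated branch, is audited, and is merged into the main catalog only if every check passes. The snippet below is a minimal, self-contained illustration of that pattern in plain Python; the table name, check logic, and dict-based "catalog" are hypothetical stand-ins for Bauplan's branch-and-merge machinery, not its actual API:

```python
# Illustration of a quality-gated update (write-audit-publish):
# write to an isolated branch, audit the new rows, and publish to
# main only if every quality check passes.

def audit(rows):
    """Run quality checks; return a list of failures (empty = pass)."""
    failures = []
    for i, row in enumerate(rows):
        if row.get("client_id") is None:
            failures.append(f"row {i}: missing client_id")
        if not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            failures.append(f"row {i}: invalid amount")
    return failures

def quality_gated_update(catalog, table, incoming_rows):
    """Write to a branch, audit, and merge into main only on success."""
    branch = dict(catalog["main"])                      # isolated write branch
    branch[table] = branch.get(table, []) + incoming_rows
    failures = audit(incoming_rows)
    if failures:
        return False, failures                          # branch is discarded
    catalog["main"] = branch                            # atomic "merge"
    return True, []

catalog = {"main": {}}
good = [{"client_id": 1, "amount": 120.0}, {"client_id": 2, "amount": 75.5}]
bad = [{"client_id": None, "amount": -3}]

ok, _ = quality_gated_update(catalog, "stripe_billing", good)
rejected, errs = quality_gated_update(catalog, "stripe_billing", bad)
print(ok, rejected, len(catalog["main"]["stripe_billing"]))  # True False 2
```

The key property is that a bad batch never reaches the main catalog: downstream readers only ever see tables that passed the gate.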
Source tables are then turned into silver and gold layers through pipelines, producing the final assets for downstream systems: unlike files, tables have authors, governance, lineage, and versioning. Since building a data estate with Bauplan is “just code”, investing in good code pays high dividends across the full data chain: clear naming conventions, proper namespacing, good column definitions, and curated gold layers mean that what is easy for humans to reason about is also easy for AI to understand.
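Conceptually, each medallion step is a transformation from one table to the next. The sketch below illustrates the idea with pure Python functions over lists of rows; the table contents and column names are hypothetical, and in a real Bauplan project each step would be a pipeline model reading from and writing to versioned Iceberg tables:

```python
# Sketch of bronze -> silver -> gold steps as pure functions over rows.

def silver_clean_events(bronze_rows):
    """Silver: normalize raw usage events and drop malformed records."""
    cleaned = []
    for row in bronze_rows:
        if row.get("client_id") is None:
            continue
        cleaned.append({
            "client_id": row["client_id"],
            "api_calls": int(row.get("api_calls", 0)),
        })
    return cleaned

def gold_client_usage(silver_rows):
    """Gold: one row per client with total API calls, ready for analytics."""
    totals = {}
    for row in silver_rows:
        totals[row["client_id"]] = totals.get(row["client_id"], 0) + row["api_calls"]
    return [{"client_id": c, "total_api_calls": n} for c, n in sorted(totals.items())]

bronze = [
    {"client_id": "acme", "api_calls": 40},
    {"client_id": None, "api_calls": 3},      # malformed: dropped in silver
    {"client_id": "acme", "api_calls": 10},
    {"client_id": "globex", "api_calls": 25},
]
gold = gold_client_usage(silver_clean_events(bronze))
print(gold)
```

Because each step is a named, reviewable function over a named table, both a human and an AI assistant can trace exactly how a gold-layer number was produced.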
After the release of Bauplan’s AI integrations (MCP and Skills), Carlos connected Claude Desktop to Bauplan and set up a demo with the operations team. The very first question was about churn risk. Claude examined the gold layer, reasoned about which signals might indicate churn, built a final query, and returned a ranked list. The client at the top of that list had churned the day before.
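The kind of ranking Claude derived can be sketched as scoring each client on a few churn signals and sorting. Everything below is hypothetical (client names, signal columns, and weights are illustrative, not Moffin's actual model):

```python
# Hypothetical churn-risk ranking over gold-layer signals:
# score each client by usage decline and open support tickets,
# then rank from highest to lowest risk.

clients = [
    {"client": "acme",    "usage_trend": -0.60, "open_tickets": 4},
    {"client": "globex",  "usage_trend":  0.10, "open_tickets": 1},
    {"client": "initech", "usage_trend": -0.20, "open_tickets": 0},
]

def churn_score(row):
    # Higher score = higher risk; usage decline is weighted more
    # heavily than open tickets (illustrative weights).
    return max(0.0, -row["usage_trend"]) * 10 + row["open_tickets"]

ranked = sorted(clients, key=churn_score, reverse=True)
print([c["client"] for c in ranked])  # ['acme', 'initech', 'globex']
```

The value of the assistant is precisely that it proposes such signals and weights itself, explains its reasoning, and lets the operations team challenge or refine them.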
That was the “aha” moment. Within days, both the leadership and operations teams were using Claude Desktop daily: the AI translates a high-level business requirement in English into a set of probing queries and analytics scripts that leverage Bauplan’s APIs and concurrency model to answer promptly, at no additional data infrastructure cost. The result is a principled division of labour: the data team maintains reliable, scalable, general data models, so that Claude can use the polished data to answer business questions directly and reliably.
The new analytics loop not only freed the engineering team to do scalable work, but also deepened non-technical teams’ engagement with data-driven decision making: it is not just that turnaround is now much quicker; Claude’s explicit reasoning traces also inspire users to refine their questions, explore new directions, and make their requests more precise.
The setup works so well that it's changing how leadership thinks about making advanced data capabilities available more broadly across the organization.
The flows that crystallize in Claude + Bauplan sessions are often already precise enough to be turned into a scalable pipeline producing assets on a schedule: exploration becomes a draft spec, which engineering hardens into a scheduled pipeline.
What about closing the loop with Claude as well? Turning Claude Desktop traces (exported by business users) into a new pipeline is still a manual engineering task today, but the team is experimenting with new Claude Skills that could significantly speed up the process.
Since traces, skills, pipelines, infrastructure, and table descriptions are all “just code” in a Bauplan project, Claude Code has everything it needs to take a first stab at it: what used to be days of refactoring can become a focused code and data review, using abstractions such as data branches and tags.