How to Build Custom Software | Active Logic Insights
Most agencies will sell you a generic “discovery, design, build, launch” diagram and skip the parts that actually decide whether your project succeeds. The real process has more steps, longer pauses, and at least three places where projects quietly go sideways. Here is how the build runs at Active Logic, where the failure modes hide, and what you actually own at the end.
The First Sales Call
The first call is short, fifteen to thirty minutes, no NDA. We are not selling you on us yet. We are figuring out whether the project you want makes sense and whether our pricing is in your budget. I tell prospects on this call what they should expect to spend: $150,000 as a baseline, $250,000 to $350,000 for a project with bells and whistles. If your reaction tells me the project is going to be a stretch, I say so on the first call rather than burn an hour of your time on a proposal you can’t fund. The full version of that math is in how much does custom software cost.
This call exists to respect both sides’ time. Prospects who close at our number are the ones who needed custom software anyway. Prospects who do not close at our number are usually better served by a different vendor or a different approach, and we will tell them so.
The Deep-Dive Discovery Call
The second call is the actual discovery. NDA in place. Screen sharing. Engineers from both sides. We bring someone with serious depth (often me, with 23 years in this work) so the technical questions get real answers, not sales answers. We prefer in-person when we can manage it, which is why we have offices in Kansas City and Miami. Whiteboards beat slides for this stage. We want to look you in the eye, shake your hand, and convince you we are real, because software is intangible and the entire engagement runs on trust.
The goal of this call is to understand your business: what software you have today, what is broken, what your team works around, what you are actually trying to accomplish. Sometimes the prospect arrives with a written spec. Almost always they arrive with Excel sheets that have been doing the job for years. Both are useful. Both feed the proposal.
The Proposal Phase
After discovery, we go back and forth, sometimes with one or two clarification calls, until we have a proposal that reflects what you actually need rather than what we initially thought you needed. Our proposals are sprint-based, not flat-rate. We do not do fixed-bid pricing on custom software, because fixed-bid pricing rewards cutting corners and punishes the scope clarification work that produces good software. We estimate conservatively: under-promise, over-deliver. A proposal that says “six months” when the real answer is three loses the deal. A proposal that says “three months” when the real answer is six destroys trust. We try to land in the middle and tell you when we don’t know.
The Kickoff Call
Once you sign, we schedule a kickoff. This is the operational hand-off. It covers how we work, when we work, how invoicing runs, what the communication cadence looks like, and who the points of contact are on both sides. It is short and unglamorous, and it sets the rhythm for the rest of the engagement.
The RDP Phase: Research, Design, and Planning
The next two to three weeks are what most agencies call discovery and what we call RDP. It is deeper than the pre-proposal discovery call. We architect the database, design the user interface, scope integrations, gather technical documentation, and plan out the sprint sequence for the entire project at a high level. For projects with hardware or sensor integration, we figure out exactly which device protocols, signal types, and data structures we are working with. RDP is where projects that are going to go well start setting themselves apart from projects that are going to go badly.
Two-Week Sprints with Pulse Reports
Once RDP is done, we move into two-week sprint cycles. Our process is agile, but not the kind of agile that produces “we have no idea how long anything takes.” Each sprint plans concrete deliverables. Every two weeks we ship a pulse report to leadership covering what was done, what is next, what is blocked, what we need decisions on, and what is moving to QA. Monthly budget check-ins compare actual burn against the plan.
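The shape of a pulse report can be sketched as a simple data structure. This is a hypothetical illustration only; the real report is prose, and the field names here are assumptions, not Active Logic's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class PulseReport:
    """One every-two-weeks report to client leadership (hypothetical fields)."""
    sprint_number: int
    completed: list[str] = field(default_factory=list)         # what was done
    upcoming: list[str] = field(default_factory=list)          # what is next
    blocked: list[str] = field(default_factory=list)           # what is blocked
    decisions_needed: list[str] = field(default_factory=list)  # client input required
    moving_to_qa: list[str] = field(default_factory=list)      # features entering QA

    def summary(self) -> str:
        return (f"Sprint {self.sprint_number}: {len(self.completed)} done, "
                f"{len(self.blocked)} blocked, "
                f"{len(self.decisions_needed)} decisions needed")

report = PulseReport(
    sprint_number=3,
    completed=["invoice export", "driver onboarding flow"],
    blocked=["payment gateway credentials"],
    decisions_needed=["SMS opt-out wording"],
)
print(report.summary())  # → Sprint 3: 2 done, 1 blocked, 1 decisions needed
```

The point of the structure is the accumulation: stack the reports end to end and you have the engagement journal described later in this article.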
Halfway through most projects, we appear to be well ahead of schedule. That is not because we are fast. It is because scope creep and edge cases compound at the end. Our team plans for this. We do not over-promise based on the optimistic mid-project pace.
Continuous QA from Day One
QA is not a final phase. The instant we have something on a staging server, our QA team is testing. We also have engineers writing tests at multiple levels (unit, feature, codebase) throughout development. Clients test on staging too, because they understand their business better than we do and they catch things our team cannot. The blunt truth: bugs that escape internal QA usually escape because they live in business-context territory we have not yet learned. Two or three weeks into a three-month project, QA is already finding things.
The deeper version of how we think about bugs at this stage is in bugs, edge cases, and the beta state.
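As a concrete illustration of what "tests at multiple levels" means, here is a minimal Python sketch. The billing function and workflow are hypothetical, invented for this example: a unit test pins down one function's behavior (including its edge case), while a feature-level test exercises the workflow those units compose into.

```python
# Hypothetical domain code: per-mile billing for a haul route.
def haul_charge(miles: float, rate_per_mile: float, minimum: float = 75.0) -> float:
    """Charge for one haul: miles times rate, never below the minimum."""
    return max(miles * rate_per_mile, minimum)

def invoice_total(hauls: list[tuple[float, float]]) -> float:
    """Feature-level workflow: sum the charges for a batch of hauls."""
    return sum(haul_charge(miles, rate) for miles, rate in hauls)

# Unit level: one function, one behavior, including the edge case.
assert haul_charge(100, 2.50) == 250.0
assert haul_charge(10, 2.50) == 75.0   # short haul hits the minimum

# Feature level: the workflow composed of those units.
assert invoice_total([(100, 2.50), (10, 2.50)]) == 325.0
print("all checks passed")
```

What neither level catches is the business-context bug: a haul type that bills by tonnage rather than mileage, say, which only the client would know to test for. That gap is exactly why clients test on staging alongside us.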
Launch in a Beta State
When the client is ready, we launch. We tell every client before launch: this is software, it will not be perfect, and the maturation curve runs for months. Even Gmail was in beta for years. We hot-fix what we find. We deploy through a continuous integration pipeline so updates ship without downtime. We are on call and on standby through the first weeks.
A recent example. We launched a text-message-based system for a logistics company that runs dump-haul services across multi-state routes. Drivers do not want a mobile app. Administrative staff needs a real web app for invoicing and client management. We built both: SMS for the drivers, full web portal for the office. The driver side is queue-based and architected to handle thousands of messages in sequence, because that is what production scale actually demands. A vibe-coded prototype handles ten messages and crashes at a thousand, which is why vibe coding is not enough for production software. Production architecture is what you are paying for.
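The queue-based design described above can be sketched in miniature. This is a simplified, hypothetical illustration using Python's in-process queue; a production system would use a durable queue service instead. The principle is the same: inbound texts are enqueued as they arrive and drained in order by a worker, so a burst of a thousand messages deepens the queue rather than crashing the handler.

```python
import queue
import threading

inbound = queue.Queue()     # stand-in for a durable message queue
processed: list[str] = []

def handle_message(msg: dict) -> None:
    # Stand-in for the real work: parse the driver's text, update the load.
    processed.append(f"{msg['driver']}:{msg['body']}")

def worker() -> None:
    # Drain the queue in arrival order; a burst just makes the queue deeper.
    while True:
        msg = inbound.get()
        if msg is None:     # sentinel: shut down cleanly
            break
        handle_message(msg)
        inbound.task_done()

t = threading.Thread(target=worker)
t.start()

# Simulate a burst of inbound driver texts.
for i in range(1000):
    inbound.put({"driver": f"d{i}", "body": "delivered"})
inbound.put(None)
t.join()
print(len(processed))  # → 1000, every message handled, in order
```

A request handler that processes each text synchronously inline has no such buffer, which is the difference between a prototype that survives ten messages and an architecture that survives a thousand.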
Handoff and Ownership
The part most buyers do not think to ask about until it is too late.
When the engagement ends, here is what you actually have:
- The code. You own it outright once the engagement is paid. We retain rights to a small set of foundational layers (auth scaffolding, internal libraries we reuse across projects), and everything else is yours. Your business logic, your workflows, your integrations: yours.
- The infrastructure. Production runs on your hosting account, not ours. Your AWS, your Vercel, your Azure. We get access to set things up. You control billing, you control access, you control everything.
- The CI/CD pipeline. Same model. Yours, set up by us.
- Documentation. Two layers: developer documentation (architecture, services, conventions) and user documentation (how the software works for your team). Depth depends on what you ask for. We also keep an automatic engagement journal in the form of pulse reports, which means you have a complete week-by-week timeline of every decision, every change, and every blocker for the entire engagement.
The honest answer about switching: if we disappeared tomorrow, you could hand the codebase to a competent next agency and they could pick it up cleanly. A few weeks of ramp-up, plus a fresh discovery, and they would be productive. Not zero friction, but not lock-in either.
The dishonest answer most companies hear about previous agencies’ code: “this is garbage, we have to rewrite it.” We have audited a lot of “garbage” codebases and most of them are fine. The actual reason agencies want to rewrite is that learning someone else’s code takes weeks, and rewriting feels faster than understanding. It usually is not. Rewrites are usually laziness, not bad code.
Rescue Projects and the Dissection Phase
Sometimes a half-built project lands at our door because the original team failed. The state varies: sometimes the freelancer disappeared, sometimes offshore communication broke down, sometimes the previous agency could not understand the business. The code is usually fine. The relationship is usually what failed.
We run what we call a dissection phase before quoting a rescue. It includes a free code audit in the proposal phase, scored against an 80-to-90-item production-grade checklist we maintain internally. The audit tells us whether the existing codebase can be finished faster than rebuilding from scratch. Sometimes it can. Sometimes the architecture is fundamentally wrong and the honest answer is “you spent $100,000 already, and getting to launch from where you are will cost more than starting over.” That is a hard conversation.
We walk away when the audit finds critical security flaws and the remaining budget is not enough to fix them. We are held liable for software we maintain, and we will not put ourselves in legal jeopardy to take a contract we cannot deliver responsibly. Sometimes we walk away when the rebuild cost exceeds the budget and the client cannot extend it. We will not bill against a doomed timeline.
Post-Launch and Ongoing Evolution
Software is not a one-time purchase. Eighty to ninety percent of our clients stay engaged with us after launch, because their business is not software, their business is whatever their business is, and they want their software to keep evolving in lockstep with it. The engagement model varies:
- Ongoing weekly retainer. Continuous feature development, AI integrations, evolution.
- Spurts. Six months on, three off, six on again. New features ship in batches.
- Ad-hoc or hourly. Maintenance only, with us on standby for emergencies.
The reason this beats hiring in-house for most mid-market companies: a non-technical founder cannot reliably vet senior engineering talent, the cost of a bad hire is enormous, and carrying full-time hires through low-need periods is wasted spend. The Team as a Service engagement model is built around this reality.
Where to Go Next
Process is one piece of the buy decision. The other two are whether custom is the right call in the first place (the pillar article covers the 80% rule for buy versus build) and what custom software actually costs (the cost article covers honest ranges from freelancer through enterprise). If you are in the vetting phase, our how to vet a software development company article goes deeper on what to ask before you sign.