SaaS development taking 6 months and still not shipped? I deliver working v1s in 14 weeks.
10+ products launched across UK, US, Germany. One won Product of the Year.
Most developers either abandon projects halfway, overcomplicate things, or choose the wrong tech stack. Delays, bugs, and technical debt you never signed up for. I prevent that.
THE RECORD
-> 10+ products launched in production across UK, US, and Germany
-> Award-winning carbon accounting platform (Innovative Product of the Year, Isle of Man)
-> AI-powered healthcare ERP with WhatsApp bot, GDPR and HIPAA compliant (Germany)
-> AI voice sales agent for European SaaS company (automated outbound calls)
-> Internal ops tools for chartered accounting firm (South Africa)
-> ETL pipeline: MongoDB to ClickHouse, query times reduced 90%
-> Multi-year client relationships (most clients work with me for 2+ years)
-> 14-week average MVP delivery, idea to deployed product
-> 6 client testimonials, zero negative reviews
WHAT I BUILD
Custom SaaS Platforms
Multi-tenant architecture, subscription billing (Stripe), user management, role-based access, admin dashboards. React/Next.js frontend, Node.js or FastAPI backend, PostgreSQL, MongoDB, Redis. Built to scale from 10 users to 10,000+.
MVPs and Product Launches
Idea to architecture in 1-2 weeks. Working prototype in 6 weeks. Deployed v1 in 14 weeks. I scope honestly. If it cannot be done in 14 weeks, I say so upfront.
AI-Integrated Applications
LangChain, LlamaIndex, LiveKit, RAG systems. AI agents with tool calling. AI voice sales agents. WhatsApp AI bots. GPT-4 and Claude integration into existing products. Production AI features, not demos.
DevOps and Infrastructure
AWS (ECS, EKS, Lambda, S3, EC2, RDS), Kubernetes, Docker, Terraform, Ansible. CI/CD with GitHub Actions and GitLab CI. Monitoring: Grafana, Prometheus, ELK. ClickHouse for analytics. Infrastructure as code from day one.
HOW I WORK
-> Architecture and scope defined before any code (1-2 weeks, no guessing games)
-> Weekly sprints with real checkpoints and working demos
-> Open communication. No ghosting. No scope creep surprises.
-> Decisions prioritize business needs, not the latest hype
-> Clean handoff: documented code, architecture decisions recorded, your team owns it
WHAT MAKES THIS DIFFERENT
Not an agency charging $100K+ for 6-month projects. Not a contractor who disappears after delivery. Not a dev shop treating you like ticket #54.
Technical partner who understands your business, explains decisions in plain English, ships fast without cutting corners, and stays with you from MVP to scale.
We designed and developed a custom, GDPR-compliant, AI-powered appointment-scheduling platform for a German healthcare organisation, built to replace manual booking processes.
Skills and deliverables
- Full-Stack Development
- AI Agent Development
- DevOps
Case Study: GDPR compliant AI-Powered Appointment Scheduling Platform
Healthcare / HealthTech • GDPR-Compliant • European Medical Practice
Confidentiality Notice: This case study discusses process decisions, architecture approach, and timelines only. No product features, user flows, or client-identifying details are disclosed. The client's intellectual property remains fully protected.
Industry Healthcare / HealthTech
Regulatory Environment GDPR (European Union - Germany)
Platform Type AI-Powered Patient Communication & Scheduling
Timeline 8 months - spec to production
Tech Stack React + Node.js + TypeScript
Key Integration WhatsApp Automation + AI Chatbot
Infrastructure AWS ECS · Docker · Encrypted
Key Result 70% reduction in manual appointment handling
How This Project Started
The founder came to us with a validated idea, seed funding, and clear product vision. They had already done their homework: researched the market, mapped their requirements, and picked a tech direction.
Their initial choice: Node.js with a Handlebars template library. It was familiar. It had worked on past projects. It seemed like the fastest path to production.
In the first conversation, before any code was written, before any contract was signed, we looked at the actual requirements: GDPR-compliant patient data handling, dynamic rendering based on consent status, real-time scheduling with WhatsApp integration, and an AI conversation layer that needed a responsive, component-driven frontend.
The recommendation: React for the frontend, Node.js for the backend.
Not because React is "better" in some abstract sense. Because for this specific product, with dynamic rendering requirements, consent-driven UI states, and real-time data flows, React's component architecture was the right fit. The Handlebars template approach would have worked for the first 3 months, then broken when the GDPR rendering requirements hit.
The founder agreed in 5 minutes once we explained why. That single stack decision, made before a line of code was written, saved an estimated 2 months of rework.
The Architecture Decision That Shaped Everything
This project followed a principle we apply to every healthcare build: the regulatory layer comes first, features come second.
Most developers approach GDPR as a compliance checkbox, something bolted on after the core features are built. For healthcare, that's backwards. GDPR isn't a feature. It's a data model.
We designed the entire data handling layer for GDPR compliance from day one:
- Dynamic rendering - showing different data to different roles based on consent status, in real time. This isn't a CSS toggle. It's an architectural pattern that determines how every component queries and displays patient data.
- Consent management - tracking what each patient has consented to and rendering the UI accordingly. Built into the data model, not added as a middleware layer after the fact.
- Data residency controls - ensuring patient data is stored and processed within the correct jurisdiction. Not wherever the cloud provider defaults.
- Encrypted communication - between WhatsApp, AI services, and the backend. End-to-end, not just at rest.
- Audit-friendly data handling - every data access traceable, every consent decision logged, every modification recorded.
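The consent-driven rendering described above can be sketched in a few lines. This is an illustrative TypeScript sketch only, with hypothetical names and field shapes, not the client's code: consent lives in the data model, and every read passes through one gate before anything reaches a component.

```typescript
// Hypothetical sketch: consent state is part of the data model, and every
// read of patient data passes through a single consent gate before rendering.
type ConsentScope = "scheduling" | "clinical_notes" | "marketing";

interface PatientRecord {
  id: string;
  name: string;
  clinicalNotes?: string;
  consents: Set<ConsentScope>; // what this patient has agreed to
}

interface Viewer {
  role: "clerk" | "doctor";
}

// Central gate: the fields a viewer may see, given role AND patient consent.
// Components never query raw records; they query through this function.
function visibleFields(record: PatientRecord, viewer: Viewer): Partial<PatientRecord> {
  const out: Partial<PatientRecord> = { id: record.id, name: record.name };
  const canSeeNotes =
    viewer.role === "doctor" && record.consents.has("clinical_notes");
  if (canSeeNotes) out.clinicalNotes = record.clinicalNotes;
  return out;
}
```

Because the gate sits in the data layer rather than the UI, withdrawing consent changes what every component renders without touching any component code.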
The alternative - building features first and adding compliance later - typically costs 3–4 months of rework on a healthcare build. We avoided that entirely because the foundation was right from week one.
What We Built
The platform architecture has three interconnected layers, each designed for the specific demands of a regulated healthcare environment.
WhatsApp + AI Chatbot Automation
We developed an AI-powered chatbot integrated with WhatsApp as the primary patient communication channel. The chatbot understands patient queries using natural language processing, handles common questions automatically, and manages the full appointment lifecycle: scheduling, rescheduling, cancellation, and instant confirmation.
The critical architectural decision here: building the AI conversation layer to work within GDPR constraints from the start. Every patient interaction follows consent protocols. No conversation data persists outside the compliant data model. The AI layer doesn't operate in a separate data silo; it reads and writes through the same GDPR-compliant data architecture as every other part of the platform.
This is where the React + Node.js decision paid off most visibly. The WhatsApp integration feeds real-time data into the React frontend through the Node.js backend. The component-based architecture means each piece of patient-facing UI respects consent state independently. With a template-based approach, this would have required a complete rendering rethink at the point GDPR requirements became non-negotiable.
Smart Scheduling Engine
At the core sits a custom scheduling system: real-time doctor availability management, conflict-free booking logic, and automatic propagation when appointments change. The scheduling engine connects directly to the AI chatbot, so every booking is accurate and immediately reflected across all touchpoints.
The data model was designed for the practice's actual clinical workflow, not adapted from a generic scheduling template. This meant the system handles the real-world exceptions that generic tools handle poorly: overlapping availability rules, last-minute changes, multi-provider coordination, and appointment types with different duration and preparation requirements.
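The conflict-free booking logic at the heart of any scheduling engine reduces to one invariant: a new appointment is accepted only if it overlaps no confirmed slot for the same provider. A minimal sketch, with illustrative names and shapes rather than the client's implementation:

```typescript
// Illustrative conflict check for a booking engine. Two slots conflict only
// if they share a provider and their time ranges overlap.
interface Slot {
  providerId: string;
  start: number; // epoch millis
  end: number;
}

function overlaps(a: Slot, b: Slot): boolean {
  // Half-open intervals: a slot ending at T does not conflict with one starting at T.
  return a.providerId === b.providerId && a.start < b.end && b.start < a.end;
}

function canBook(existing: Slot[], candidate: Slot): boolean {
  return !existing.some((s) => overlaps(s, candidate));
}
```

In production this check would run inside a transaction so two simultaneous bookings can't both pass it, but the invariant itself is this simple.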
Admin & Staff Portal
A lightweight internal dashboard gives clinic staff full visibility: appointment management, doctor availability configuration, chatbot conversation monitoring, and manual override capability when edge cases arise. Role-based access control ensures staff see only what they need, another GDPR requirement baked into the architecture, not bolted on.
Infrastructure & Security
The platform was deployed on AWS ECS infrastructure - chosen specifically because healthcare data requires full control over where and how sensitive information is stored. No shared hosting. No multi-tenant defaults.
The deployment architecture:
Docker-based containerisation - consistent environments from development to production, isolated services, clean deployment pipeline.
Automated backups - with verified restoration procedures. Healthcare data doesn't get a second chance.
Comprehensive monitoring and logging - for reliability and for audit compliance. When a regulator asks "what happened at this timestamp," the answer is immediate.
Scalable architecture - designed to support additional clinics without redesigning the compliance or data layers. The foundation holds whether it's one practice or twenty.
Results & Impact
70% reduction in manual appointment handling. The automation layer now processes the majority of bookings that previously required staff intervention: phone calls, callbacks, manual diary entries.
24/7 booking availability. Patients schedule appointments outside office hours through WhatsApp. The phone-call bottleneck, which created peak-hour backlogs and missed appointments, is eliminated.
Faster patient response times. The AI chatbot provides immediate responses where patients previously waited for callbacks during business hours. No more "we'll call you back."
Reduced administrative workload. Clinical staff now spend their time on patient care rather than phone-based scheduling. The practice didn't need to hire additional front-desk staff despite increasing appointment volume.
Scalable foundation. The architecture is designed to expand as a SaaS product serving additional practices, without rebuilding the compliance or data layers. The GDPR-first approach means every new clinic connects to an already-compliant infrastructure.
Why This Timeline Worked
8 months from first conversation to production
This wasn't because we write faster code. It was because the architecture was right for the regulatory requirements from day one.
The founder had quotes from other developers. Timelines ranged from 6 to 14 months. All "included GDPR compliance." None had specified what that actually meant for the data model.
Here's what those range differences usually mean:
The 6-month quote typically means GDPR as afterthought. Standard database, compliance bolted on. Works until the first regulatory review, then it's a rebuild.
The 14-month quote typically means over-engineered. Enterprise architecture for a startup that needs to ship. Every possible edge case handled before day one. Sounds thorough. Delays launch by half a year.
The 8-month reality meant GDPR in the data model from day one. Minimal but correct. Ships on time. Passes compliance because the foundation is right, not because the budget is big.
The most expensive line in a healthtech proposal is "includes GDPR compliance" with no specification of what that means for the data model.
The Stack Decision in Hindsight
Looking back, the single most impactful moment in this entire project was the first conversation, before the contract, before the architecture document, before a line of code.
The founder had picked Node.js + Handlebars templates. A reasonable choice based on familiarity. For a simpler product, it would have been fine.
For a GDPR-compliant healthcare platform with dynamic rendering, consent-driven UI states, and real-time WhatsApp integration, it would have cost 2 months of rework when the requirements outgrew the template approach.
The React + Node.js recommendation wasn't about preference. It was about matching the stack to the regulatory and product requirements. One conversation. One question at the right time. Two months saved.
That's the pattern across every build: the decisions that determine success aren't the framework, the language, or the hosting. They're the architecture choices made in the first two weeks that nobody thinks to challenge.
How I Reference This Project
When I discuss past work, I share process decisions, architecture approach, and timelines. Never the product itself. Never the client.
I don't share what clients build. I don't name them. When I reference this project, I talk about how we structured the build, what regulatory decisions mattered, and why the timeline worked.
Your idea stays yours.
Building in Healthcare or a Regulated Industry?
I run free 30-minute Build Plan sessions for founders who want a second opinion on their technical architecture before they commit.
You share your product category and the tech you've picked. No product details needed. No IP shared.
You walk away with a 1-page decision doc: what's solid, what's risky, and a realistic timeline. Plus a build plan outline with phases. Two documents. Marked confidential. Yours to keep whether we work together or not.
Contact me
Winner Innovative Product of the Year, Isle of Man | ISO & GDPR-Compliant for UK
Skills and deliverables
- Web Development
- DevOps
- Python
- React
- Node.js
Case Study: Carbon Accounting & Energy Management Platform
Sustainability / CleanTech • ISO-Compliant GHG Reporting • Scope 1, 2 & 3
Confidentiality Notice: This case study discusses process decisions, architecture approach, and timelines only. No product features, user flows, or client-identifying details are disclosed. The client's intellectual property remains fully protected.
Industry Sustainability / CleanTech
Compliance Framework ISO-Compliant GHG Reporting
Platform Type Carbon Accounting & Energy Management
Timeline 2+ years (scope expanded with regulatory jurisdictions)
Tech Stack Node.js · React · TypeScript
Key Integrations IoT Energy Sensors · ETL Pipelines · AI Analytics
Infrastructure Self-Managed VPS · Docker · Database Replicas
Recognition Won Product of the Year in its category
Key Result 80% faster queries · 60% IoT reliability improvement
How This Project Started
The founder came to us with a clear vision: build a platform that gives businesses real-time, data-driven insights to reduce energy costs, track emissions, and simplify compliance reporting. They had evaluated off-the-shelf carbon accounting tools and found them costly, rigid, and fundamentally limited in how they handled data.
The core problem wasn't features; every carbon reporting tool has dashboards and charts. The problem was data architecture. Existing tools were built for Scope 1 and 2 emissions data, which is structured, predictable, and clean. The founder's ambition went further: Scope 3 supply chain data. And Scope 3 breaks everything.
Different suppliers report in different formats. Different time periods. Different levels of completeness. Some don't report at all and you need to estimate. Most sustainability platforms bolt Scope 3 on later. It never works because the underlying database schema can't handle the variability.
The founder needed someone who understood that the data model IS the product, and was willing to build the architecture for Scope 3 complexity from day one, not retrofit it later.
The Architecture Decision That Shaped Everything
This project followed a principle that applies to every data-intensive build: design the data model for where the product needs to be in 12 months, not where it is today.
When the founder came to us, the initial approach was a standard NoSQL database: flexible, fast to prototype, and the default choice for most early-stage platforms. For Scope 1 and 2 data, NoSQL works fine. The data is structured. The queries are predictable.
But we looked at the actual requirements: ISO-compliant GHG reporting across Scope 1, 2, and 3. Real-time energy data from IoT sensors. Supply chain emissions from dozens of sources in dozens of formats. Analytics dashboards that needed to aggregate and slice this data in real time.
NoSQL would have worked for the first 6 months. Then the query performance would have collapsed under the weight of Scope 3 variability and the analytical workload.
The recommendation: start with NoSQL for ingestion flexibility, then build ETL pipelines to move processed data into a columnar database for analytics and reporting.
This wasn't a simple database swap. It was a fundamental architectural decision about how data flows through the platform: raw, messy supply chain data comes in through a flexible ingestion layer, gets normalised into a consistent internal format, then lands in a columnar store optimised for the exact query patterns that ISO-compliant reporting demands.
That architectural choice, made early, before the analytical requirements became urgent, is the reason the platform handles data that competitors choke on.
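The ETL step between the two stores can be sketched briefly. This is a hypothetical TypeScript illustration of the pattern, not the platform's code: flexible ingestion documents (variable shape, as NoSQL permits) are flattened into fixed-schema rows ready for bulk load into a columnar store.

```typescript
// Hypothetical ETL transform: a loosely-shaped ingestion document becomes a
// flat, fixed-schema row for the columnar analytics store. Missing values
// are defaulted and flagged rather than rejected.
interface RawEmissionDoc {
  supplier: string;
  period?: string;               // may be absent in supplier data
  emissions?: { co2e?: number }; // nested and optional
}

interface AnalyticsRow {
  supplier: string;
  period: string;     // normalised, defaulted when absent
  co2eTonnes: number;
  estimated: boolean; // true when the source gave no figure
}

function toRow(doc: RawEmissionDoc, fallbackCo2e: number): AnalyticsRow {
  const reported = doc.emissions?.co2e;
  return {
    supplier: doc.supplier,
    period: doc.period ?? "unknown",
    co2eTonnes: reported ?? fallbackCo2e,
    estimated: reported === undefined,
  };
}
```

The `estimated` flag is the key design choice: incomplete data flows through the pipeline and into reports with its provenance visible, instead of failing the import.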
The Scope 3 Problem (And Why Most Platforms Fail Here)
This is worth explaining in detail because it's the single biggest technical differentiator in sustainability tech.
Scope 1 emissions are direct: fuel burned, processes run. Clean data. Predictable format.
Scope 2 emissions are indirect from purchased energy. Still structured. Still manageable.
Scope 3 emissions are everything else in the supply chain. And this is where the data gets ugly.
A single company might have 50 suppliers across 12 countries. Each supplier reports emissions data differently: different formats, different time periods, different levels of granularity, different completeness. Some suppliers provide detailed breakdowns. Some provide a single annual number. Some provide nothing, and you need to estimate using industry averages and allocation methods.
Most sustainability platforms handle Scope 1 and 2 beautifully, then fall apart at Scope 3. Why? Because they designed the data model for clean, structured emissions data. When messy, incomplete, multi-format supply chain data arrives, the schema can't handle the variability. The platform either rejects the data, requires manual cleanup for every import, or produces inaccurate reports.
We designed the data model for Scope 3 complexity from day one. A flexible ingestion layer that normalises messy supply chain data, regardless of format, frequency, or completeness, into a consistent internal representation. The schema expects the mess. It's architected for variability, not perfection.
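One way such an ingestion layer can "expect the mess" is to model each reporting granularity explicitly and reduce every report to one comparable figure. This sketch is illustrative only; the union shape and the spend-based emission factor are stand-in assumptions, not the client's values:

```typescript
// Illustrative normalisation of heterogeneous Scope 3 supplier reports.
// Each granularity the schema expects is a distinct variant; nothing is
// rejected for being "the wrong shape".
type SupplierReport =
  | { kind: "monthly"; tonnesPerMonth: number[] }  // detailed breakdown
  | { kind: "annual"; tonnes: number }             // single yearly figure
  | { kind: "none"; spendEur: number };            // estimate from spend

// factorPerEur (tonnes CO2e per EUR spent) is a placeholder value standing
// in for an industry-average emission factor.
function annualTonnes(r: SupplierReport, factorPerEur = 0.0004): number {
  switch (r.kind) {
    case "monthly": return r.tonnesPerMonth.reduce((a, b) => a + b, 0);
    case "annual":  return r.tonnes;
    case "none":    return r.spendEur * factorPerEur; // estimated, not reported
  }
}
```

The discriminated union is the schema-level version of "architected for variability": adding a new supplier format means adding a variant, not rebuilding the model.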
This is why the platform won Product of the Year. Not because of features. Because the data architecture underneath handles real-world supply chain data that competitors can't.
What We Built
The platform evolved through three distinct phases - PoC, MVP, and final production build - each adding architectural depth while maintaining the data model integrity established in phase one.
Phase 1: Custom Platform & Data Architecture
We migrated away from off-the-shelf products entirely. The existing tools were costing more in licensing and workarounds than a purpose-built platform would cost to develop. More critically, they imposed data structures that couldn't handle the Scope 3 requirements.
The custom build gave the founder complete control over customisation, security, and most importantly the data model. Every table, every relationship, every query path was designed for the specific analytical and compliance requirements of carbon accounting, not adapted from a generic SaaS template.
Phase 2: IoT Integration & Real-Time Energy Monitoring
We selected, tested, and integrated energy IoT sensors for real-time consumption tracking and system performance monitoring. This wasn't plug-and-play: each sensor type produces data in different formats at different frequencies, and the platform needed to ingest all of it reliably.
The IoT integration improved system reliability by 60% and enabled real-time fault detection. When a sensor reports anomalous readings, the platform catches it immediately rather than discovering the data quality issue weeks later during reporting.
The key architectural decision: treating IoT data ingestion as the same pattern as Scope 3 supply chain ingestion. Variable formats, variable frequencies, variable reliability: all normalised through the same flexible ingestion layer. One pattern, applied consistently.
Phase 3: Advanced Analytics & ETL Pipelines
This is where the early data model decisions paid off dramatically.
We implemented ETL pipelines to move processed data from NoSQL to the columnar database. The result: 80% reduction in query times for the analytical dashboards and ISO-compliant GHG reporting.
The ISO-compliant dashboards now produce accurate emissions tracking across all three scopes, in real time, not as a monthly batch process. Businesses using the platform can pull Scope 1, 2, and 3 reports at any time and know the data is current, accurate, and audit-ready.
If we'd built the analytics on top of the original NoSQL store, the typical approach, the platform would have hit a performance wall as data volume grew. The ETL pipeline architecture means the ingestion layer and analytics layer are decoupled. Each scales independently. Each is optimised for its specific workload.
Infrastructure & DevOps
The platform runs on self-managed VPS architecture - the same infrastructure philosophy as our healthcare builds, chosen because energy data and emissions reporting require full control over data handling.
The deployment architecture:
Docker-based containerisation - isolated services, clean deployment pipeline, consistent environments from development to production.
Database replicas - for reliability and performance. The analytical queries don't compete with ingestion workloads.
Automated backups - with verified restoration procedures. Compliance data doesn't get a second chance.
CI/CD pipelines - introduced alongside monitoring systems and agile workflows. The initial PoC phase was delivered in six weeks; the ongoing build maintained deployment discipline throughout the 2+ year evolution.
Monitoring and logging - for reliability and audit compliance.
The infrastructure is future-ready, designed to support rapid growth in both data volume and user base without architectural rework.
Why This Timeline Was 2+ Years (And Why That's a Feature, Not a Bug)
The healthcare platform we built took 8 months. This one took 2+ years. Same budget class ($40K vs $50K). Same architectural discipline. Completely different timeline. Why?
Because compliance scope expanded with each regulatory jurisdiction added.
The initial build - PoC to production - moved fast. But carbon accounting regulations are not static. New jurisdictions adopt different reporting standards. ESG requirements evolve. The platform needed to support each new jurisdiction without rebuilding the foundation.
This is where the early architecture decision proved its value. The data model designed in month 1 is still the data model running in production today. No rebuilds. No rework. The foundation held through 2+ years and multiple jurisdiction expansions.
The longer timeline wasn't scope creep. It was scope expansion on a foundation built to handle it. That's the difference between a platform that grows and a platform that breaks.
Results & Impact
80% faster queries for real-time analytics and reporting. The ETL pipeline architecture means ISO-compliant GHG dashboards return results in seconds, not minutes.
60% reliability improvement in IoT-based energy monitoring. Real-time fault detection catches data quality issues immediately rather than during monthly reporting cycles.
Product of the Year in its category. Not because of features, because of the data architecture underneath. The platform handles messy, incomplete, multi-format supply chain data that competitors reject or mishandle.
Significant cost savings from moving away from rigid off-the-shelf tools. The custom platform costs less to operate than the licensing fees for the tools it replaced, while handling data complexity those tools couldn't touch.
Future-ready infrastructure to support rapid data and user growth. The decoupled ingestion and analytics layers scale independently. New data sources, new jurisdictions, new reporting requirements: the architecture absorbs them without rework.
The Pattern: Data Model Is the Product
In sustainability tech, the invisible architecture is the competitive advantage. Users don't see the data model. They don't see the ingestion layer. They don't see the ETL pipelines.
They see that the platform works when others don't. They see that Scope 3 reports are accurate when competitors produce garbage. They see that a new supplier's data integrates in hours, not weeks.
That's not a feature. That's a data architecture decision made before the first line of feature code was written.
The most expensive decision in a sustainability platform happens in week one: the data model. If it's designed for Scope 1–2 simplicity, you'll rebuild when Scope 3 arrives. If it's designed for Scope 3 complexity from day one, the foundation holds for years.
This platform was designed for Scope 3 from day one. The foundation held. The product won an award. And the data model designed in month 1 is still the data model in production.
How I Reference This Project
When I discuss past work, I share process decisions, architecture approach, and timelines. Never the product itself. Never the client.
I don't share what clients build. I don't name them. When I reference this project, I talk about how we structured the data architecture, what decisions mattered for Scope 3 complexity, and why the foundation survived 2+ years of expansion.
Your idea stays yours.
Building in Sustainability or a Data-Intensive Industry?
I run free 30-minute Build Plan sessions for founders who want a second opinion on their data architecture before they commit.
You share your product category and the data requirements you're dealing with. No product details needed. No IP shared.
You walk away with a 1-page decision doc: what's solid in your approach, what's risky, and a realistic timeline. Plus a build plan outline with phases. Two documents. Marked confidential. Yours to keep whether we work together or not.
Contact me
South African Tax & Audit Compliance
Skills and deliverables
- React
- Node.js
- ERP Software
Case Study: Internal Accounting Operations Platform
FinTech / RegTech • Audit-Ready • Chartered Accounting Firm (South Africa)
Confidentiality Notice: This case study discusses process decisions, architecture approach, and timelines only. No product features, user flows, or client-identifying details are disclosed. The client's intellectual property remains fully protected.
Industry Financial Services / Chartered Accounting
Regulatory Environment South African Tax & Audit Compliance
Platform Type Internal Operations & Workflow Automation
Project Scope Legacy Stabilisation → Production-Grade Rebuild
Tech Stack React (Frontend) · Node.js (Backend)
Infrastructure Heroku (Dev / QA / Production) · CI/CD Pipelines
Key Architecture Audit Trail · RBAC · Multi-Role Approval Workflows
Key Results 50% faster load times · 60% faster API response · Full audit readiness
How This Project Started
The firm came to us with a problem that wasn't about building something new. It was about saving something that was already breaking.
They had an existing internal platform - built to manage tax workflows, submissions, approvals, and compliance tracking. In theory, it replaced the spreadsheets and manual processes the firm had been running on for years. In practice, the system was failing under real operational pressure.
Dashboards timed out during peak filing periods. Critical modules crashed under load. API responses were slow enough that staff reverted to manual workarounds. The audit trail, the single most important feature for a regulated accounting firm, had gaps. Partners couldn't trust the data they were seeing.
The firm didn't need a new system. They needed someone who could look at the existing codebase, understand what was architecturally broken versus what was just poorly optimised, and stabilise it before the next filing deadline. Then rebuild the parts that couldn't be saved.
This is the pattern with internal tools: the pain isn't "we need software." The pain is "our software is supposed to work and it doesn't, and the next deadline is in 6 weeks."
The Architecture Decision That Shaped Everything
Before writing a single line of new code, we did something most developers skip: we diagnosed the existing system.
The instinct - for most developers and most firms - is to propose a full rebuild. Scrap it, start fresh, build it properly. It feels clean. It sounds professional. And for a firm with active tax deadlines, compliance obligations, and staff who've already learned the existing system, it's the wrong answer.
The right answer was a two-phase approach:
Phase 1: Stabilise what exists. Find the architectural bottlenecks that are causing timeouts and failures. Fix the queries that are killing performance. Shore up the modules that crash under load. Get the system reliable enough to survive the next filing period.
Phase 2: Rebuild the parts that can't be fixed. Some problems aren't performance issues; they're architectural issues. An audit trail with gaps isn't a slow query. It's a data model that doesn't capture what it needs to capture. A workflow that allows unauthorised status changes isn't a UI bug. It's a permissions architecture that was never properly designed.
This diagnosis-first approach saved the firm months. Instead of a 6–12 month rebuild with zero platform availability during the transition, they got stability within weeks and a rolling upgrade path that never took the system offline during a filing deadline.
Phase 1: Legacy Stabilisation
The first priority was performance. A system that's too slow to use is a system that doesn't exist: staff will revert to spreadsheets and manual processes, and every hour they spend on workarounds is an hour the firm is paying for twice.
50% reduction in application load time. The dashboard loads that were timing out during peak periods? The culprit was long-running queries hitting the database without proper indexing, pulling entire datasets when the UI only needed aggregated summaries. We restructured the query patterns to match what the interface actually needed.
60% improvement in API and server response times. The backend was processing requests sequentially where it should have been handling them concurrently. Endpoints that staff hit dozens of times per day (checking submission status, pulling client records, running reports) were the worst offenders. We optimised the critical path first.
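The sequential-to-concurrent fix is worth a concrete sketch. This is a generic illustration of the pattern, not the firm's code: independent lookups that were awaited one after another are issued together, so total latency becomes the slowest call rather than the sum of all calls.

```typescript
// Before: each independent lookup waits for the previous one to finish.
async function loadDashboardSequential(
  getStatus: () => Promise<string>,
  getClients: () => Promise<number>,
) {
  const status = await getStatus();   // waits...
  const clients = await getClients(); // ...then waits again
  return { status, clients };
}

// After: independent lookups run concurrently; latency = max, not sum.
async function loadDashboardConcurrent(
  getStatus: () => Promise<string>,
  getClients: () => Promise<number>,
) {
  const [status, clients] = await Promise.all([getStatus(), getClients()]);
  return { status, clients };
}
```

The pattern only applies when the calls are genuinely independent; anything with ordering or transactional dependencies still has to run in sequence.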
Stabilised critical modules experiencing timeouts under load. During peak filing periods, multiple staff members hitting the same modules simultaneously would cause cascading failures. The architecture wasn't designed for concurrent access at the volumes a real accounting firm generates during deadline weeks.
This wasn't glamorous work. No new features. No redesigned UI. Just making the existing system do what it was supposed to do: work reliably when the firm needed it most.
But here's why it mattered strategically: every week the stabilised system performed reliably, the firm's trust in the platform grew. Partners who had stopped using certain modules started using them again. Staff who had built spreadsheet workarounds began retiring them. The system started becoming the single source of truth it was always supposed to be.
That trust was essential for Phase 2, because Phase 2 required the firm to rely on the platform for things they'd never trusted it with before.
Phase 2: Architecture Rebuild
With the system stable, we could address the structural problems that no amount of optimisation would fix.
Audit Trail Architecture
For a chartered accounting firm, the audit trail isn't a feature. It's the foundation. If a regulator asks "can you prove this record was never improperly modified?" the answer needs to be immediate and definitive.
The existing system had gaps. Records could be modified without a clear history of who changed what and when. Deletions happened without mandatory reason capture. Multi-month audit reports were unreliable because the underlying data didn't track state changes consistently.
We implemented transaction-level audit trails - every change to every record is logged with who made it, when, and what the previous state was. Deletions require mandatory reason capture. Historical reports can reconstruct the exact state of any record at any point in time.
This is the same architectural principle we apply to fintech builds: the audit trail is the data model, not a feature bolted on top. For regulated financial software, this is the difference between passing an audit and rebuilding 60% of your platform when the auditor arrives.
Role-Based Access Control & Approval Workflows
The firm operates with a clear hierarchy: Clerks prepare work, Partners review and approve, and Super Partners have final authority on critical submissions. The original system implemented this loosely: roles existed, but permissions were inconsistent, and the system allowed status changes that should have required approval.
We rebuilt the permissions architecture with granular approval flows: Clerk → Partner → Super Partner, with permissions varying by role and tax stage. Unauthorised status changes are prevented at the architecture level, not the UI level. "Rework required" loops are controlled: a Partner can send work back to a Clerk with specific notes, and the system tracks every round-trip.
This reduced approval ambiguity and operational risk. When a Partner approves a submission, it means something: the system enforces that the required review actually happened.
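"Enforced at the architecture level, not the UI level" can be made concrete with a small state machine. A hedged sketch, with illustrative stage and role names (the production flow has more stages and per-tax-type rules): every status transition names the minimum role allowed to drive it, and anything not listed is rejected regardless of what the UI offers.

```python
from enum import Enum

class Role(Enum):
    CLERK = 1
    PARTNER = 2
    SUPER_PARTNER = 3

class Stage(Enum):
    DRAFT = "draft"
    REVIEW = "review"
    APPROVED = "approved"
    FILED = "filed"

# Minimum role allowed to drive each transition; anything unlisted is rejected.
ALLOWED = {
    (Stage.DRAFT, Stage.REVIEW): Role.CLERK,
    (Stage.REVIEW, Stage.DRAFT): Role.PARTNER,        # "rework required" loop
    (Stage.REVIEW, Stage.APPROVED): Role.PARTNER,
    (Stage.APPROVED, Stage.FILED): Role.SUPER_PARTNER,
}

def transition(current: Stage, target: Stage, actor: Role) -> Stage:
    required = ALLOWED.get((current, target))
    if required is None or actor.value < required.value:
        raise PermissionError(
            f"{actor.name} cannot move {current.value} -> {target.value}"
        )
    return target
```

Because the check lives in the domain layer, a Clerk cannot approve a submission even via a direct API call; hiding the button in the UI is a courtesy, not the control.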
Tax Automation Engine
The firm had recurring monthly processes that were being done manually: creating tax entries, verifying they were created correctly, and flagging failures. Manual processes that run monthly are manual processes that fail at the worst possible time.
We automated monthly tax entry creation with scheduled jobs that run reliably without intervention. But automation alone isn't enough: you need to know when it fails. We built failure detection reports that confirm successful execution and flag any entries that weren't created correctly.
The audit-style reports show client name, tax type, entry creation status, and timestamps. Partners can verify at a glance that the automated process completed correctly. This directly addressed the trust concern: "If we automate this, how do we know it actually worked?"
The answer: the same way you'd verify it manually, except the system does it automatically and flags exceptions instead of requiring you to check every entry.
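The job-plus-report pairing can be sketched as follows. This is a hypothetical simplification (the `create_entry` callable and the report fields stand in for the real entry-creation logic and schema): the job never swallows a failure silently, and the output is exactly the audit-style report described above, one row per client with status and timestamp.

```python
from datetime import date

def create_monthly_entries(clients, create_entry):
    """Run the monthly tax entry job and produce an audit-style report:
    one row per client, so partners can verify the run at a glance."""
    report = []
    for client in clients:
        row = {
            "client": client["name"],
            "tax_type": client["tax_type"],
            "run_date": date.today().isoformat(),
        }
        try:
            row["entry_id"] = create_entry(client)
            row["status"] = "created"
        except Exception as exc:
            # A failure is recorded and flagged, never silently swallowed.
            row["entry_id"] = None
            row["status"] = f"FAILED: {exc}"
        report.append(row)
    return report

def failures(report):
    """Only the exceptions - what a partner actually needs to act on."""
    return [row for row in report if row["entry_id"] is None]
```

A scheduler (cron, Heroku Scheduler, or similar) runs the job; the report answers "did it work?" without anyone checking every entry by hand.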
What the Firm Got
Beyond the core audit trail and workflow architecture, the platform now handles the full operational lifecycle:
Client & entity management with centralised records, history tracking, and controlled field updates based on business rules. No accidental data loss. No untracked changes.
Quality control dashboards with performance statistics, partner and super checker reports, and filters by assigned clerk and date range. Management can see who's doing what, how fast, and where bottlenecks are forming.
Correspondence and objection management with automatic linking, reducing the manual errors that come from staff manually connecting documents to cases.
Reporting with date-type aware filters for accurate monthly reporting, consolidated deadline views, and downloadable statistics. The reports match real workflow, not a generic template that "sort of" represents what the firm actually does.
Optimised notifications - reduced redundant emails, improved subject-line clarity with financial year context, and ensured notifications match real workflow actions. When staff get a notification, it means something happened that requires their attention, not that the system is generating noise.
Infrastructure & Deployment
We designed and managed a three-environment Heroku deployment: Development, QA, and Production. CI/CD pipelines for safe deployments. The ability to push live production fixes during critical deadlines without taking the system offline.
For an accounting firm, deployment discipline isn't optional. A bad deploy during filing week doesn't just create a bug, it creates a compliance risk. Every deployment goes through QA. Every production fix is tested before it touches live data.
We handled live outages and dependency conflicts with minimal disruption, including during peak operational periods when the system couldn't afford downtime.
The Bridge Role
One of the most valuable - and least visible - parts of this project was translating between accounting partners and the development process.
Accounting partners think in terms of submissions, deadlines, audit requirements, and regulatory obligations. Developers think in terms of data models, API endpoints, and deployment pipelines. The gap between these two worlds is where most internal tool projects fail.
We acted as the bridge: taking accounting-domain requirements and translating them into technical architecture that actually serves the workflow. When a partner says "I need to know that no one changed this record after I approved it," that's an audit trail architecture requirement. When a clerk says "I can't find the correspondence for this objection," that's a data relationship problem.
This translation work - understanding the domain well enough to build for it, not just code for it - is why the platform works under real operational pressure instead of just looking good in a demo.
Results & Impact
50% faster load times - dashboards that timed out during peak periods now load reliably in half the time.
60% faster API responses - the endpoints staff hit dozens of times per day are optimised for their actual usage patterns.
Full audit readiness - transaction-level audit trails, mandatory reason capture for deletions, and historical reports that can reconstruct any record's state at any point in time.
Faster approvals with fewer blockers - granular RBAC means the right people approve at the right stage, with no ambiguity about who authorised what.
Automated tax workflows - monthly processes that were error-prone and manual now run reliably with automatic failure detection.
Clear visibility into errors and corrective actions - when something goes wrong, the system shows exactly what happened, when, and what needs to be fixed.
A stable platform that evolves with regulatory needs - the architecture is designed to absorb new compliance requirements without rebuilding the foundation.
The Pattern: Internal Tools Are Architecture Problems
Every B2B company has a process that eats 10–20 hours per week in manual work. Everyone knows it should be automated. Nobody does it because generic SaaS tools don't fit the exact workflow.
This project proves the pattern: internal tools succeed when the architecture matches the actual workflow, not when the workflow is forced to match the tool.
The platform does specific things - tax workflows, approvals, audit trails, compliance tracking. But it does those things precisely right for how this firm actually operates. Not a generic approximation. Not a template adapted from another industry. A system built for the real process, including the exceptions, edge cases, and regulatory requirements that generic tools handle poorly or not at all.
The scoping conversation - understanding exactly how the team works, what inputs come from where, what exceptions happen, and what the reporting needs to look like - is where the real value lives. The code is just implementing what we mapped.
How I Reference This Project
When I discuss past work, I share process decisions, architecture approach, and timelines. Never the product itself. Never the client.
I don't share what clients build. I don't name them. When I reference this project, I talk about the audit trail architecture, the stabilisation approach, and why diagnosis before rebuild saved months. Your idea stays yours.
Running Critical Operations on Spreadsheets or Fragile Internal Tools?
I run free 30-minute Build Plan sessions for firms and founders who need a second opinion on their internal tool architecture.
You describe the process that's eating your team's time. No sensitive data needed. I map what a custom tool, or a rebuilt tool, looks like: architecture approach, realistic timeline, and cost range. 1-page scope document. Yours to keep.
Contact me
Working with Amol for three years and really happy with all the stuff we did together. The communication back and forth is really smooth and their actual knowledge is in-depth. Great to work with a partner who understands requirements and knows how to translate them into code.
Amol is a trusted and reliable employee, he always strives to deliver the best results and is very output driven.