# Multi-Tenant Vitals Architecture for Telehealth SaaS Platforms
How telehealth SaaS platforms design multi-tenant architecture for vitals data isolation, tenant-specific processing, and scalable rPPG integration.

Multi-tenant vitals architecture is the problem that telehealth SaaS platforms eventually run into, usually sooner than they expect. The first customer is easy. You build the vitals pipeline, hook up the rPPG SDK, ship it. Then the second health system signs on, and suddenly you are asking questions about data isolation, tenant-specific processing configs, and whether your database schema can actually keep Patient A's blood pressure readings from leaking into Hospital B's dashboard.
This is not a theoretical concern. A 2025 analysis by ComplyDog found that multi-tenant SaaS privacy failures most often stem not from dramatic breaches but from quiet misconfigurations: inadequate row-level security policies and tenant boundaries that were assumed rather than enforced. In healthcare, where HIPAA violations carry penalties of up to $2.13 million per violation category per year (as adjusted by HHS in 2024), those assumptions get expensive fast.
> "Row-level security policies enforce tenant isolation at the database query level, ensuring that every data request is automatically filtered based on the authenticated tenant. Physical separation is not required for HIPAA compliance when logical controls are properly implemented." — Kevin Yun, ComplyDog, September 2025
## How multi-tenant telehealth vitals architecture differs from standard SaaS
Most multi-tenant SaaS guides talk about user data, configuration preferences, and billing. Telehealth vitals add a layer that standard B2B software never touches: continuous physiological signal data generated in real time during clinical encounters.
When a patient sits in front of their camera during a video visit and an rPPG algorithm extracts heart rate, respiratory rate, and SpO2 from the video frames, that data has to flow through a pipeline that handles ingestion, processing, storage, and retrieval. Each of those stages needs tenant awareness. And the requirements vary by tenant — one health system may want 30-second measurement windows while another requires 60-second captures with different signal quality thresholds.
Here is where vitals-specific multi-tenancy diverges from the standard playbook:
| Concern | Standard SaaS multi-tenancy | Telehealth vitals multi-tenancy |
|---|---|---|
| Data volume per tenant | Moderate — user records, configs | High — continuous time-series physiological data |
| Real-time requirements | Low — API responses in seconds | High — vitals processing during live video visits |
| Regulatory burden | SOC 2, possibly GDPR | HIPAA, state privacy laws, potentially FDA for clinical use |
| Data sensitivity | PII, financial data | PHI including biometric and physiological measurements |
| Processing variability | Minimal — same logic per tenant | Significant — different measurement configs, thresholds, alert rules |
| Retention policies | Standard — 30 days to 7 years | Tenant-specific — varies by state law and clinical use case |
| Integration targets | CRMs, email, analytics | EHR systems via HL7 FHIR, clinical decision support tools |
The real-time processing requirement is what separates this from most multi-tenant discussions. When a patient is mid-consultation and the rPPG pipeline is extracting vitals from their video feed, latency matters clinically. A 2024 study published by researchers at the University of Sulaimani (Ali S. Salim and Abdul Sattar M. Khidhir) found that rPPG algorithms running on standard RGB video can achieve mean absolute error below 2 BPM for heart rate under controlled conditions. But achieving that accuracy requires stable frame delivery and consistent processing — both of which get harder when multiple tenants share the same compute infrastructure.
## Three isolation models and when each makes sense
The telehealth industry has largely settled on three architectural patterns for multi-tenant vitals platforms. Each carries different trade-offs in cost, complexity, and compliance posture.
### Shared database with row-level security
This is the most common starting point. All tenants share the same database, and every table includes a tenant_id column. Row-level security (RLS) policies at the database level ensure that queries only return data belonging to the authenticated tenant. PostgreSQL has native RLS support, and managed services like AWS RDS and Google Cloud SQL support it out of the box.
For vitals data, this means the time-series measurements, the session metadata, the calculated vital sign values — all sit in the same physical tables, separated by tenant identifiers. The advantage is operational simplicity. One schema to migrate, one backup strategy, one monitoring setup.
The risk is that a single misconfigured query or a developer who forgets to pass the tenant context can expose cross-tenant data. Roving Health's 2025 analysis of healthcare SaaS architecture noted that platforms using this model should implement tenant context injection at the ORM or middleware layer, not at the application query level, to reduce the surface area for human error.
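The middleware-layer injection idea can be sketched in a few lines. This is an illustrative Python sketch (the names `VitalsQuery`, `set_tenant_context`, and the query builder itself are invented for this example, not a real library): the tenant identifier is set once per request by the auth layer and the data layer refuses to build a query without it, so application code never has to remember to pass the filter.

```python
from contextvars import ContextVar
from dataclasses import dataclass

# Tenant context is set once per request by the auth middleware and read
# by the data layer -- application code never passes tenant_id by hand.
_current_tenant: ContextVar[str] = ContextVar("current_tenant")


def set_tenant_context(tenant_id: str) -> None:
    """Called by auth middleware after validating the session token."""
    _current_tenant.set(tenant_id)


@dataclass
class VitalsQuery:
    """Toy query builder: every query is forced through the tenant filter."""
    table: str

    def to_sql(self) -> str:
        # Raises LookupError if middleware never set a tenant. Failing
        # closed is the point: no tenant context, no data.
        tenant_id = _current_tenant.get()
        return (
            f"SELECT * FROM {self.table} "
            f"WHERE tenant_id = '{tenant_id}'"  # parameterize in real code
        )


set_tenant_context("hospital_b")
print(VitalsQuery("vitals_measurements").to_sql())
```

In a real deployment this application-side filter would sit on top of database-enforced RLS policies, not replace them; the two layers fail independently, which is what makes the combination robust.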
### Schema-per-tenant
Each tenant gets its own database schema within a shared database instance. The vitals tables, configuration tables, and integration mappings all live in a tenant-specific schema. Connection pooling routes each request to the correct schema based on authentication.
This provides stronger isolation than RLS alone because a stray query literally cannot reference another tenant's tables without explicitly switching schemas. The cost trade-off is more complex migration tooling — when you ship a schema change, it has to be applied across every tenant schema.
For telehealth platforms with 10 to 50 health system clients, this model often hits the right balance. A 2025 guide from Ariel Softwares described this as "best for mid-stage SaaS products with moderate tenant counts and clients that need data-level separation for contractual or audit reasons."
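Connection-time schema routing can be sketched as follows, assuming a PostgreSQL-style `search_path`. The tenant names and the registry dict are purely illustrative; in production the registry would live in a control-plane table, and the statement would be issued at connection checkout by the pooler.

```python
import re

# Registry of provisioned tenant schemas (illustrative names; a real
# platform would load this from a control-plane table).
TENANT_SCHEMAS = {
    "acme_health": "tenant_acme_health",
    "north_clinic": "tenant_north_clinic",
}

_SCHEMA_NAME = re.compile(r"^[a-z_][a-z0-9_]*$")


def schema_statement(tenant_id: str) -> str:
    """Build the SET search_path statement to run at connection checkout."""
    schema = TENANT_SCHEMAS.get(tenant_id)
    if schema is None:
        raise KeyError(f"unknown tenant: {tenant_id}")
    if not _SCHEMA_NAME.match(schema):
        # Defense in depth: never interpolate an unvalidated identifier.
        raise ValueError(f"invalid schema name: {schema}")
    # After this runs, a stray unqualified query resolves tables inside
    # the tenant's schema only.
    return f'SET search_path TO "{schema}"'


print(schema_statement("acme_health"))
```

The fail-closed lookup is the important part: an unrecognized tenant raises rather than falling back to a default schema.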
### Database-per-tenant (silo model)
Each tenant gets a fully separate database instance. Complete physical isolation. No shared infrastructure at the data layer. This is what large health systems and government contracts typically require.
The operational cost is real. Each database needs its own backups, monitoring, scaling, and maintenance. But for tenants processing thousands of vitals measurements daily across hundreds of providers, the performance isolation alone can justify it. No noisy neighbor problems. No shared I/O contention during peak telehealth hours.
| Isolation model | Cost efficiency | Isolation strength | Operational complexity | Best for |
|---|---|---|---|---|
| Shared DB + RLS | High | Moderate | Low | Early-stage, cost-sensitive |
| Schema-per-tenant | Medium | Good | Medium | Mid-market, 10-50 tenants |
| Database-per-tenant | Low | Complete | High | Enterprise, government, large health systems |
## Tenant-specific vitals processing pipelines
Data isolation is half the problem. The other half is processing isolation — making sure that Tenant A's rPPG configuration does not bleed into Tenant B's measurement pipeline.
In practice, different health systems want different things from the same vitals SDK. One cardiology-focused system might want 60-second measurement windows with HRV analysis enabled. A primary care network might want 30-second quick scans capturing only heart rate and blood pressure trends. A behavioral health platform might only care about stress indicators derived from HRV metrics.
The architecture that handles this well separates the vitals processing into a configurable pipeline where tenant-specific parameters are loaded at session initialization. The patient starts a video visit, the platform authenticates the session and identifies the tenant, and the vitals pipeline loads that tenant's measurement profile before the first frame is processed.
This is where microservices architecture pays off for telehealth platforms. Arpatech's 2025 analysis of scalable RPM and telehealth systems recommended segmenting services into device ingestion, data storage, analytics/AI, notification engines, and user access layers. For multi-tenant vitals, the analytics/AI layer is where tenant-specific processing configs live. Each measurement session pulls its configuration from a tenant-scoped config store, runs the rPPG processing with those parameters, and writes results back to the tenant's data partition.
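The session-initialization flow above can be sketched as a tenant-scoped config lookup. Everything here is illustrative (the `MeasurementProfile` fields, the tenant names, and the in-memory store are assumptions, not a real SDK API); the point is that the profile is resolved once, before the first frame is processed, and unknown tenants fall back to safe defaults.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MeasurementProfile:
    """Per-tenant rPPG processing parameters (illustrative fields)."""
    window_seconds: int = 30
    min_signal_quality: float = 0.7
    vitals: tuple = ("heart_rate", "spo2")
    hrv_analysis: bool = False


# Tenant-scoped config store; defaults apply where a tenant overrides nothing.
PROFILES = {
    "cardiology_net": MeasurementProfile(
        window_seconds=60,
        vitals=("heart_rate", "hrv", "spo2"),
        hrv_analysis=True,
    ),
    "primary_care_co": MeasurementProfile(
        window_seconds=30,
        vitals=("heart_rate", "blood_pressure_trend"),
    ),
}


def load_profile(tenant_id: str) -> MeasurementProfile:
    """Resolve the profile once at session init, before the first frame."""
    return PROFILES.get(tenant_id, MeasurementProfile())


profile = load_profile("cardiology_net")
print(profile.window_seconds, profile.vitals)
```

Freezing the dataclass is a deliberate choice: a measurement profile should be immutable for the lifetime of a session, so mid-visit config changes take effect only on the next session.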
### Handling compute isolation
The question of whether to run tenant vitals processing on shared or dedicated compute depends on volume and latency requirements. For client-side rPPG processing (where the SDK runs on the patient's device), this is largely a non-issue — each patient's browser or app handles its own computation. The tenant separation happens at the data ingestion layer.
For server-side processing, where video frames are sent to backend services for rPPG analysis (a topic covered in more detail in our WebRTC and rPPG architecture guide), compute isolation becomes more relevant. Kubernetes namespaces with resource quotas can provide soft isolation. Dedicated node pools per tenant provide hard isolation. The choice depends on the SLA the health system is paying for.
## FHIR integration across tenant boundaries
Every health system has its own EHR. Epic, Cerner (now Oracle Health), Meditech, Athenahealth — and each instance has its own FHIR endpoint, authentication flow, and resource mapping. Multi-tenant vitals platforms need to maintain separate FHIR integration configurations per tenant, each with its own OAuth credentials, endpoint URLs, and field mappings.
The HL7 FHIR Vital Signs profile (based on the Observation resource) provides a standard structure for representing heart rate, blood pressure, respiratory rate, and oxygen saturation. But "standard" in FHIR-land still means tenant-specific customization. One health system might want vitals pushed as individual Observations. Another might want them bundled into a DiagnosticReport. A third might want real-time streaming via FHIR Subscriptions.
A well-designed multi-tenant FHIR integration layer maintains per-tenant configuration that specifies:
- The FHIR server endpoint and authentication method
- Which vital sign types to push (some tenants may only want heart rate and SpO2)
- The mapping between internal measurement IDs and FHIR coding systems (LOINC codes)
- Retry and error handling policies
- Whether to use FHIR R4 or R5 resources
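A minimal sketch of that per-tenant mapping layer might look like the following. The LOINC codes shown are the standard ones for these vital signs; the `TenantFhirConfig` shape, the helper function, and the endpoint URL are assumptions made for illustration, not a specific vendor's API.

```python
from dataclasses import dataclass

# Standard LOINC codes for common vital signs.
LOINC = {
    "heart_rate": ("8867-4", "Heart rate"),
    "respiratory_rate": ("9279-1", "Respiratory rate"),
    "spo2": ("2708-6", "Oxygen saturation in Arterial blood"),
}


@dataclass(frozen=True)
class TenantFhirConfig:
    """Per-tenant FHIR integration settings (illustrative shape)."""
    endpoint: str
    fhir_version: str = "R4"
    push_vitals: tuple = ("heart_rate", "spo2")
    max_retries: int = 3


def to_observation(cfg, vital, value, unit, patient_ref):
    """Map an internal measurement to a FHIR Vital Signs Observation."""
    if vital not in cfg.push_vitals:
        return None  # this tenant opted out of pushing this vital sign
    code, display = LOINC[vital]
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs"}]}],
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": code, "display": display}]},
        "subject": {"reference": patient_ref},
        "valueQuantity": {"value": value, "unit": unit,
                          "system": "http://unitsofmeasure.org"},
    }


cfg = TenantFhirConfig(endpoint="https://fhir.example-hospital.org/r4")
obs = to_observation(cfg, "heart_rate", 72, "/min", "Patient/123")
```

The early `return None` is where tenant preference enforcement lives: the same measurement session produces different FHIR payloads per tenant without branching anywhere else in the pipeline.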
## Current research and evidence
The intersection of multi-tenant architecture and real-time physiological sensing is still a relatively young field. Most published research focuses on either multi-tenant SaaS patterns in general or rPPG signal processing in isolation. The combination is largely documented in industry practice rather than academic literature.
That said, several research threads are directly relevant. The University of Bonn's 2025 work published in Frontiers in Digital Health demonstrated that the POS (plane-orthogonal-to-skin) method for rPPG produces reliable signals even from H.264 compressed video at standard telehealth bitrates. This matters for multi-tenant platforms because it means the video compression settings can remain consistent across tenants without degrading measurement quality — one fewer tenant-specific variable to manage.
On the architecture side, a 2025 report from Binadox on multi-tenant vs. single-tenant SaaS found that many regulatory frameworks, including HIPAA, accept multi-tenant solutions when proper controls are implemented. The key phrase is "proper controls" — for vitals data, that means encryption at rest and in transit, audit logging per tenant, and access controls that prevent cross-tenant data exposure at every layer of the stack.
Dr. Daniel McDuff's body of work on camera-based physiological sensing, including his research at Microsoft Research and later at Google, has consistently argued that the video stream infrastructure for telehealth already contains the optical data needed for contactless vitals extraction. The multi-tenant challenge is not about the signal processing itself but about wrapping that processing in the right isolation, configuration, and compliance boundaries.
## The future of multi-tenant telehealth vitals
The direction the industry is heading is clear enough: more tenants, more vital sign types, and more granular per-tenant customization. As telehealth platforms move beyond heart rate and SpO2 toward blood pressure trending, stress indices, and respiratory pattern analysis (a shift we explored in our piece on how video-based vitals improve clinical outcomes), the processing pipelines get more complex and the per-tenant configuration surface grows.
Edge computing may shift some of the multi-tenancy concerns. If rPPG processing happens entirely on the patient's device, the backend multi-tenancy problem shrinks to data storage and integration rather than compute isolation. WebAssembly-based rPPG SDKs running in the browser are already feasible, and as device processing power continues to increase, the argument for server-side vitals processing weakens for most use cases.
The platforms that get multi-tenant vitals architecture right early will have a structural advantage. Building tenant isolation as an afterthought — after the second or third health system signs on — is significantly more painful and risky than designing for it from the start.
## Frequently asked questions
### Is multi-tenant architecture HIPAA compliant for vital signs data?
Yes, when implemented with proper controls. HIPAA does not require physical data separation. Row-level security, encryption, access controls, and audit logging can satisfy HIPAA requirements in a multi-tenant setup. The key is demonstrating that one tenant's PHI cannot be accessed by another tenant, whether through technical controls or administrative safeguards.
### How do telehealth platforms handle different EHR integrations per tenant?
Each tenant gets its own FHIR integration configuration, including endpoint URLs, OAuth credentials, resource mappings, and push preferences. The platform maintains a per-tenant integration registry that the vitals pipeline consults after processing each measurement session.
### Should rPPG processing happen client-side or server-side in a multi-tenant platform?
Client-side processing simplifies multi-tenancy because the compute isolation problem disappears — each patient's device handles its own processing. Server-side processing offers more control over algorithm versions and quality assurance but introduces compute isolation challenges. Most platforms are moving toward client-side processing with server-side validation.
### What database model works best for telehealth vitals multi-tenancy?
It depends on scale and client requirements. Shared database with RLS works for early-stage platforms. Schema-per-tenant suits mid-market companies with 10-50 health system clients. Database-per-tenant is appropriate for enterprise deals where the client requires complete physical isolation, which is common in government and large health system contracts.
Platforms like Circadify are building rPPG SDKs designed for multi-tenant telehealth integration, with tenant-aware processing pipelines and FHIR-compatible output from the start. For telehealth platforms evaluating contactless vitals integration, designing the multi-tenant architecture before the first SDK call is the decision that pays off down the line.
