Security
Last updated: April 18, 2026
Our approach
BioCanvas stores your scientific figures and supports collaboration across workspaces. We take a layered approach: encryption for data in transit and at rest, narrow access paths, short-lived credentials, and trusted third-party infrastructure (AWS, Vercel, Stripe) rather than rolling our own where we don't need to.
We're a small, focused team. This page is meant to be honest about what we have in place today and what we plan to add before we take on customers with stricter compliance needs.
Data protection
- Encryption in transit: TLS 1.2+ is enforced on every public endpoint, and HSTS (with preload) is enabled.
- Encryption at rest: AWS-managed encryption (AES-256) on RDS, EBS volumes, and S3 buckets that hold figures.
- Secret management: Application secrets are stored encrypted (SOPS + AWS KMS), never committed to git in plaintext. KMS key rotation is enabled.
- Passwords: Hashed with bcrypt before storage. We never log passwords or access tokens.
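The password handling above follows the standard salted, adaptive-hashing pattern. As an illustrative sketch only (not BioCanvas's actual code), here is that pattern using Python's standard-library PBKDF2 as a stand-in, since bcrypt itself requires a third-party package; bcrypt's work factor plays the same role as the iteration count here.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune the cost to your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-user salt defeats precomputed (rainbow-table) attacks.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

Only the salt and digest are stored; the plaintext password is never written anywhere, which is also why it must never appear in logs.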
Access control
- Workspace isolation: Figures are private to their workspace by default. Permissions are enforced at the API layer using a fine-grained authorization system.
- Pod-level identity: Backend services use short-lived AWS credentials via IRSA (IAM Roles for Service Accounts). No long-lived access keys in containers.
- Production access: Restricted to authorized personnel. Admin operations are audited at the Kubernetes + AWS layer.
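The workspace-isolation model above boils down to a deny-by-default membership check at the API layer. This is a minimal sketch of that idea with hypothetical names (`MEMBERSHIPS`, `get_figure`), not BioCanvas's actual authorization system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Figure:
    id: str
    workspace_id: str

# Hypothetical membership table: workspace id -> set of member user ids.
MEMBERSHIPS = {
    "ws-lab-a": {"alice", "bob"},
    "ws-lab-b": {"carol"},
}

class Forbidden(Exception):
    pass

def get_figure(user_id: str, figure: Figure) -> Figure:
    # Deny by default: a figure is visible only to members of its workspace.
    if user_id not in MEMBERSHIPS.get(figure.workspace_id, set()):
        raise Forbidden(f"{user_id} is not a member of {figure.workspace_id}")
    return figure
```

Keeping the check in one choke point at the API layer (rather than scattered across handlers) is what makes "private by default" enforceable.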
Network
- Production runs in a private VPC (AWS eu-central-1). Backend pods are not internet-exposed; only the shared ALB is.
- AWS WAF sits in front of the ALB with rate-limiting rules.
- The RDS instance sits in private subnets, reachable only from within the VPC.
Subprocessors
BioCanvas shares data with the following third-party services, and only as needed to operate the platform. For details on cross-border transfers, see our privacy policy.
| Service | Purpose | Location |
|---|---|---|
| AWS | Cloud infrastructure, compute, data storage | EU (eu-central-1) primary, US failover |
| Stripe | Payment processing for Pro + Enterprise plans | United States / EU |
| Vercel | Static site hosting for the landing page | United States (global edge) |
| Porkbun | Domain registrar for biocanvas.app | United States |
| Anthropic | LLM inference for the optional AI figure-generation agent | United States |
| OpenAI | LLM inference for the optional AI figure-generation agent | United States |
| Telegram | Internal operational notifications from the contact and waitlist forms | Global |
AI figure-generation agent
The AI agent is an optional Pro feature, off by default. When you use it, the prompt and figure context required to generate a scene are sent to third-party LLM providers (Anthropic, OpenAI), and the generated scene is returned to your workspace as a draft. We recommend the following precautions:
- Do not include secrets, credentials, patient-identifying information, or any confidential data in prompts.
- Treat AI-generated content as a draft — review it before sharing or publishing.
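In the spirit of the first precaution, a client could scrub obviously sensitive strings from a prompt before it leaves the workspace. This is a hypothetical sketch, not a BioCanvas feature, and a couple of regexes are no substitute for keeping confidential data out of prompts in the first place:

```python
import re

# Illustrative patterns only; real detection of credentials or
# patient identifiers requires far more than this.
PATTERNS = [
    (re.compile(r"(?i)\b(api[_-]?key|token|secret)\s*[:=]\s*\S+"),
     "[REDACTED-CREDENTIAL]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
     "[REDACTED-EMAIL]"),
]

def scrub_prompt(prompt: str) -> str:
    # Replace each matched span before the prompt is sent anywhere.
    for pattern, replacement in PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt
```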
Assessments
Third-party security assessments (penetration testing, SOC 2) are on the roadmap. We don't have published reports yet. If you're evaluating BioCanvas for institutional use and need formal assurances, get in touch and we'll share what we have and where we're headed.
Disclosure
Found a vulnerability? Please disclose it privately via our contact page. Include steps to reproduce, impact, and any suggested mitigations. We'll acknowledge within one business day and keep you informed through resolution.
Please do not perform denial-of-service testing, spam-bomb production systems, or attempt to access other customers' data.