Why Archevi Cannot Read Your Data (And Why That Matters)
The encryption backdoor conversation
In early 2025, the UK government ordered Apple to build a backdoor into iCloud encryption -- allowing authorities to access any user's data on demand. Rather than build the backdoor, Apple withdrew Advanced Data Protection from UK users entirely.
When a company holds a master key to your data, governments can demand that key. This is not a theoretical concern -- it happened to Apple, the largest technology company in the world.
At Archevi, we designed the architecture so that the question never arises. Here is exactly how.
Per-tenant row-level security
Most SaaS products use a single shared database. Your data and every other user's data sit in the same tables, separated by a user ID column. This is called multi-tenant shared infrastructure. It is cheaper to operate and easier to build.
Archevi takes a different approach. Each family is isolated by PostgreSQL row-level security (RLS) policies enforced at the database level. Every query runs through a dedicated application role that can only reach rows belonging to your tenant -- the isolation is enforced by the database engine itself, not by application code, so no cross-tenant query path exists.
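As a concrete illustration, here is roughly what a tenant-isolation policy looks like in PostgreSQL. The table, column, policy, and setting names below are hypothetical sketches, not Archevi's actual schema:

```sql
-- Hypothetical schema: table, policy, and setting names are illustrative.
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
ALTER TABLE documents FORCE ROW LEVEL SECURITY;  -- apply the policy even to the table owner

-- The application role can only see rows whose tenant_id matches the
-- tenant bound to the current database session.
CREATE POLICY tenant_isolation ON documents
    USING (tenant_id = current_setting('app.tenant_id')::uuid);

-- At connection time, the application binds the session to one tenant:
-- SET app.tenant_id = '<tenant uuid>';
```

Because the policy lives in the database, even a bug in application code cannot produce a query that returns another family's rows.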
What this means in practice
This architecture is more expensive to operate. It requires more infrastructure, more monitoring, and more careful orchestration. We accept that cost because privacy is not a feature of Archevi -- it is the product.
Boundary anonymization: how AI works without seeing your data
Archevi uses AI to answer questions about your documents. A natural question follows: if the AI can read my documents, does it learn my personal information?
The answer is no, because of a technique we call boundary anonymization.
How it works
- Upload -- you add a document to your vault -- say, your home insurance policy.
- Content extraction -- the system extracts the text from the document inside your tenant's secured data store.
- Anonymization -- before any text leaves your database to reach an AI model, personally identifiable information is stripped. Names become PERSON_1. Addresses become ADDRESS_1. Policy numbers become ID_1. Dates of birth become DATE_1.
- AI processing -- the model receives only the anonymized text. It can understand that PERSON_1 is the beneficiary of a life insurance policy with coverage of $500,000, but it does not know that PERSON_1 is your spouse.
- Re-identification -- when the answer is returned to you, the tokens are replaced with the real values. You see your spouse's name, never PERSON_1.
The AI model never learns who you are, where you live, or what your policy numbers are. It processes structure and meaning, not identity.
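The round trip described above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not Archevi's implementation -- real PII detection relies on far more than a list of known values, and the token map never leaves the tenant's data store:

```python
def anonymize(text, pii_values):
    """Replace known PII values with placeholder tokens before the text
    leaves the tenant boundary. Returns the scrubbed text plus the
    token map needed to reverse the substitution later."""
    mapping = {}
    counters = {}
    for value, category in pii_values:
        counters[category] = counters.get(category, 0) + 1
        token = f"{category}_{counters[category]}"
        mapping[token] = value
        text = text.replace(value, token)
    return text, mapping

def reidentify(text, mapping):
    """Restore real values in the AI's answer before showing it to the user."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

# Hypothetical document and PII inventory, for illustration only.
doc = "Jordan Lee is the beneficiary, policy 88-4412, born 1984-03-02."
pii = [("Jordan Lee", "PERSON"), ("88-4412", "ID"), ("1984-03-02", "DATE")]

scrubbed, token_map = anonymize(doc, pii)
# scrubbed == "PERSON_1 is the beneficiary, policy ID_1, born DATE_1."

answer = "The beneficiary named in the policy is PERSON_1."
print(reidentify(answer, token_map))
# -> The beneficiary named in the policy is Jordan Lee.
```

The key property is that only `scrubbed` ever crosses the boundary to the AI model; `token_map` stays behind and is applied once the answer comes back.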
Your data never trains any AI model
This is a separate commitment from anonymization. Even if anonymization were not in place, we would still never use your data to train AI models.
Many AI products use customer data to improve their models. OpenAI's consumer ChatGPT uses conversations for training by default (you can opt out in settings), and Google's terms of service grant it a broad license to host and analyze the content you store in services like Drive in order to operate and improve them.
Archevi does not train on your data. Period. This is not a setting you need to find and toggle. It is a permanent architectural decision. Your documents exist to serve your family, not to improve our product.
Canadian data residency and PIPEDA
All Archevi data is stored on DigitalOcean's Toronto servers (TOR1 region). This is not just a hosting choice -- it is a jurisdictional choice.
- PIPEDA governs your data: Canada's Personal Information Protection and Electronic Documents Act sets strict rules about how your data is collected, used, and disclosed.
- Canadian courts required: Any government request to access your data must go through Canadian courts. No foreign government can compel Archevi to hand over your data without Canadian judicial oversight.
- No CLOUD Act exposure: The US CLOUD Act allows American authorities to compel US companies to hand over data regardless of server location. Archevi is a Canadian company storing data on Canadian servers. The CLOUD Act does not apply.
Why this matters right now
The Apple/UK situation is not an isolated incident. Governments around the world are pushing for increased access to encrypted data. AI companies are racing to collect more training data. The trend is toward less privacy, not more.
At the same time, families are uploading increasingly sensitive documents to digital platforms -- insurance policies, wills, medical records, tax returns. The gap between the sensitivity of the data and the privacy guarantees of most platforms is growing.
Archevi closes that gap. Not with vague reassurances or privacy policies written by lawyers to protect the company. With architecture. Per-tenant row-level security. Boundary anonymization. Canadian data residency. No training. No ads. No backdoors.
Transparency is the foundation of trust
We are publishing this explanation because you deserve to know exactly how your data is handled. Not in legal jargon. Not in marketing language. In plain technical terms that anyone can verify.
If you have questions about any of this, ask us. We will answer directly.
Sign up free and see what a privacy-first family document vault feels like. No credit card required.