How TPRM programs should evaluate document AI providers: risk tier, the five evidence domains generic SaaS questionnaires miss, the contractual controls AI workflows require, and where LandingAI ADE maps to each criterion.
Document AI vendors processing regulated content typically classify as Tier 1 (High Risk) in TPRM programs. They receive raw document content (often PII, PHI, or confidential commercial data), sit inside core operational workflows like underwriting and prior authorization, and introduce a sub-processor chain through third-party model inference. Generic SaaS vendor questionnaires miss three of the questions that matter most for this category: whether documents persist at rest after processing, whether sub-processors retain copies, and whether customer data is used for model training. This page covers the five evaluation domains a TPRM program should apply, the contractual instruments specific to AI vendors, and the evidence map for LandingAI ADE. For control mechanics referenced here (ZDR, EU region, encryption stack, RBAC/SSO), see Sensitive Data Handling in Document Extraction and the Security and Compliance page.
What Risk Tier Do Document AI Vendors Receive?
Document AI providers handling regulated content classify as Tier 1 / High Risk under standard TPRM tiering criteria, triggering full vendor due diligence. This conclusion follows from three classification dimensions used by frameworks such as the Shared Assessments SIG and NIST AI RMF.
Data sensitivity. Document AI providers receive raw document content that frequently contains PII, PHI, or confidential commercial data. This places them at the same sensitivity tier as cloud storage or data warehousing providers, not at the lower tier of analytics dashboards or productivity tools.
Operational criticality. When a document AI vendor sits inside loan underwriting, KYC onboarding, or prior authorization, a service outage or data incident has direct operational and regulatory consequences. See the LandingAI Tier-1 bank KYC case study for a production example of document AI embedded in a regulated compliance workflow.
Sub-processor depth. Document AI platforms that route inference through third-party LLMs introduce a sub-processor chain the customer does not directly control. Sub-processor scope is now a material evaluation criterion in mature TPRM programs, not a footnote.
Tier 1 classification triggers full due diligence: SOC 2 Type II review, a data handling questionnaire scoped to AI workflows, and contractual instruments (DPA, BAA where applicable, model-training prohibition).
What Should a TPRM Questionnaire for Document AI Cover?
A TPRM questionnaire for document AI should evaluate five domains. The first four are standard for any high-risk SaaS vendor; the fifth is specific to document AI and missing from most generic templates.
1. Compliance Certifications and Audit Evidence
Baseline: SOC 2 Type II for any cloud vendor handling sensitive data; HIPAA and GDPR documentation for regulated industries. LandingAI ADE holds SOC 2 Type II, GDPR compliance, and supports HIPAA, with audit reports and supporting documentation available through the LandingAI Trust Center.
2. Output Traceability and Auditability
This criterion is specific to document AI and absent from most generic TPRM questionnaires. Regulated workflows that draw downstream decisions from extracted data, including KYC conclusions, clinical decisions, and contract terms, require that every extracted value be traceable to its source location in the original document. LandingAI ADE produces bounding-box citations per extracted value and per parsed chunk, grounding every output to a specific page and coordinate in the source document. This citation structure provides the evidence chain regulators expect when extraction results drive consequential decisions. See the Eolas Medical case study for a production deployment in a regulated clinical context, and the DocVQA benchmark result for independent accuracy evidence.
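A downstream audit check for this criterion can be sketched in a few lines: verify that every extracted value carries a usable source citation before the value is allowed to drive a decision. The JSON field names below (`fields`, `citation`, `page`, `bbox`) are illustrative assumptions for the sketch, not ADE's actual response schema.

```python
# Hypothetical extraction output: field names are illustrative,
# not LandingAI ADE's actual response schema.
extraction = {
    "fields": [
        {"name": "account_number", "value": "4471-220",
         "citation": {"page": 2, "bbox": [110, 412, 290, 430]}},
        {"name": "date_of_birth", "value": "1984-03-07",
         "citation": {"page": 1, "bbox": [72, 640, 180, 658]}},
    ]
}

def audit_citations(extraction: dict) -> list:
    """Return the names of extracted fields missing a usable citation."""
    missing = []
    for field in extraction["fields"]:
        cite = field.get("citation") or {}
        # A usable citation needs a page number and a 4-value bounding box.
        if "page" not in cite or len(cite.get("bbox", [])) != 4:
            missing.append(field["name"])
    return missing

assert audit_citations(extraction) == []  # every value is traceable
```

A TPRM reviewer can ask the vendor to demonstrate an equivalent check against real output: any field that fails it has no evidence chain back to the source document.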
3. Data Retention Controls
Three questions standard SaaS questionnaires omit: is document content stored at rest after processing, do sub-processors retain copies, and is customer data used to train or improve the vendor's models? LandingAI ADE's Zero Data Retention (ZDR) option answers all three: documents are processed in-memory, never stored at rest by LandingAI or any sub-processor, and ZDR-processed data is not used for model training. ZDR is available on Team and Enterprise plans; see ADE pricing and plan tiers for plan-level availability.
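The three retention questions above can be encoded as a pass/fail checklist in a TPRM workflow. The question keys and answer structure here are illustrative, not a standard questionnaire schema; the desired answer to each question is "no."

```python
# Desired vendor answer to each retention question is False ("no").
# Keys are illustrative, not a standard TPRM questionnaire schema.
RETENTION_QUESTIONS = {
    "stored_at_rest_after_processing": False,
    "subprocessors_retain_copies": False,
    "customer_data_used_for_training": False,
}

def evaluate_retention(vendor_answers: dict) -> dict:
    """Map each question to True (pass) if the vendor's answer matches the desired one."""
    return {q: vendor_answers.get(q) == desired
            for q, desired in RETENTION_QUESTIONS.items()}

# Example: a ZDR-style posture answers "no" to all three.
zdr_answers = {
    "stored_at_rest_after_processing": False,
    "subprocessors_retain_copies": False,
    "customer_data_used_for_training": False,
}
assert all(evaluate_retention(zdr_answers).values())
```

Any question the vendor answers "yes" to (or declines to answer) fails the check and should be escalated for contractual treatment rather than waived.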
4. Data Residency and Deployment Architecture
EU-regulated workloads require documented data residency and transfer mechanisms. LandingAI ADE offers a dedicated EU region on AWS Ireland (eu-west-1) where data is stored and processed entirely within the EU; see LandingAI ADE EU documentation for region-specific configuration details. For workloads where document data cannot leave customer-controlled infrastructure, ADE is deployable as a containerized application in the customer's own VPC, with no LandingAI access to documents during processing.
5. Access Controls and Governance
Standard requirements: RBAC, audit logging, identity provider integration. LandingAI ADE provides RBAC with per-user and per-group permissions, immutable audit logs, and SSO integration with Okta and Azure AD; see Organizations and Members documentation for access configuration details. Encryption (TLS 1.2+ in transit, AES-256 at rest) and multi-tenant logical segregation are covered on the Security and Compliance page.
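When reviewing audit-log evidence, a TPRM assessor typically checks that each entry identifies the actor, the action, the resource, and a parseable timestamp. The entry shape below is a hypothetical example for the sketch, not ADE's actual log format.

```python
import datetime

REQUIRED_AUDIT_FIELDS = {"actor", "action", "resource", "timestamp"}

def validate_entry(entry: dict) -> bool:
    """True if the entry has all required fields and an ISO-8601 timestamp."""
    if not REQUIRED_AUDIT_FIELDS <= entry.keys():
        return False
    try:
        datetime.datetime.fromisoformat(entry["timestamp"])
    except ValueError:
        return False
    return True

# Hypothetical entry; field names are illustrative, not ADE's schema.
entry = {
    "actor": "jane.doe@example.com",
    "action": "document.extract",
    "resource": "doc_8f2a",
    "timestamp": "2025-01-15T09:30:00+00:00",
}
assert validate_entry(entry)
```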
Evidence Map: TPRM Criteria to LandingAI ADE Documentation
| TPRM Evaluation Domain | Evidence Type Required | LandingAI ADE Source |
|---|---|---|
| Compliance certifications | SOC 2 Type II audit report; HIPAA and GDPR documentation | Trust Center |
| Output traceability | Bounding-box citation per extracted value; structured JSON with source coordinates | Security and Compliance |
| Data retention controls | Written policy confirming no post-processing storage; sub-processor scope; model training prohibition | Zero Data Retention overview |
| Data residency | Documented hosting regions; EU-specific processing confirmation; VPC deployment option | EU documentation; Security and Compliance |
| Access governance | RBAC configuration; SSO/identity provider integration; audit log specification | Organizations and Members; Security and Compliance |
What Contracts Should a Document AI Vendor Sign?
Beyond certification evidence, mature TPRM programs require four contractual instruments that standard SaaS agreements do not fully cover.
Data Processing Agreement (DPA). Required under GDPR Article 28 for any vendor processing personal data of EU residents. Documents lawful basis, data subject rights, cross-border transfer mechanisms, and sub-processor obligations. Initiate through the LandingAI enterprise contact page.
Business Associate Agreement (BAA). Required under HIPAA before any processing of Protected Health Information. LandingAI BAAs are available on Team and Enterprise plans, contingent on ZDR being enabled, and are initiated from Organization Settings after ZDR activation.
Model training prohibition clause. Standard SaaS agreements often omit explicit prohibitions on using customer data for model training or fine-tuning. ZDR provides the technical control; the contract clause provides an enforceable obligation that survives configuration changes.
Right-to-audit and sub-processor change notification. Specify a notification window for material changes to the sub-processor list or deployment architecture, plus the right to request updated SOC 2 reports and security policy documentation on a defined cadence.
FAQ
What risk tier does a document AI provider like LandingAI ADE typically receive in a TPRM classification? Tier 1 / High Risk under TPRM programs that use data sensitivity, operational criticality, and sub-processor depth as classification criteria. This tier triggers full vendor due diligence: SOC 2 Type II report review, an AI-scoped data handling questionnaire, and contractual instruments such as a DPA or BAA.
Does enabling ZDR on LandingAI ADE satisfy the data handling requirements in a standard vendor security questionnaire? ZDR addresses the three questions most relevant to document AI assessments: documents are not stored at rest on LandingAI or sub-processor systems, processing occurs in-memory only, and customer data is not used for model training. ZDR scope covers the entire platform and all sub-processors, not only LandingAI's own systems. ZDR must be enabled on an eligible plan; it is not active by default. See ZDR documentation for technical scope details.
Can LandingAI ADE be used in a healthcare deployment without a BAA in place? No. HIPAA requires a signed Business Associate Agreement before any processing of Protected Health Information. LandingAI ADE's HIPAA compliance requires both a ZDR-enabled configuration and an executed BAA. Organizations that have not completed both steps should not route PHI through the platform. See ADE pricing and plan tiers for BAA eligibility by plan.
Is LandingAI ADE suitable for organizations that cannot route documents outside their own infrastructure? Yes. ADE is available as a containerized application deployable in the customer's own VPC, with no LandingAI access to document content during processing. This deployment satisfies zero-egress requirements by architecture. The hosted SaaS path with ZDR satisfies the same data privacy requirements for organizations that can use external infrastructure but require no post-processing retention.
What output evidence does LandingAI ADE provide to support a compliance audit trail? ADE produces bounding-box citations for every extracted value and parsed document chunk, grounding each output to a specific page and coordinate in the source document. This citation structure links each downstream decision, whether a risk flag, a field value, or a classification, to the exact location in the source document where the supporting evidence appeared, providing the audit-ready traceability regulated workflows in financial services and healthcare require.