Synopsis
This chapter provides a vendor-neutral request for proposal template for adding or renewing CDN providers in a multi-CDN deployment. The template covers scope, objectives, technical and operational requirements, test plans, pilot expectations, and an evaluation rubric. Fields are expressed as placeholders and may be tailored to regional or product constraints.
Usage notes
The template is designed to be copied into a standalone document. Placeholders use angle brackets. Optional sections can be removed if out of scope. Terminology follows definitions in related chapters so responses remain comparable across providers.
RFP template
# <Project Name>: Multi-CDN Request for Proposal
Version: <v1.0>
Date: <YYYY-MM-DD>
Contact: <procurement@domain.example>
Confidentiality: Responses are treated as confidential; do not disclose without written consent.
## 1. Background and objectives
Service: <sites, APIs, media, regions>
Current state: <single CDN / multi-CDN>
Objectives: availability <target>, latency <targets by region>, compliance <jurisdictions>, cost controls <high level>.
Constraints: <origin locations, data residency, change windows, tooling>.
## 2. Scope
Traffic types: <HTML, APIs, images, video, large files>.
Regions and networks: <regions, key ASNs>.
Volumes: <baseline GB/mo, req/mo, peaks, seasonality>.
Growth assumptions: <low/base/high>.
Timeline: <RFP open>, Q&A window <dates>, pilot window <dates>, decision date <date>.
## 3. Technical requirements
3.1 Protocols and TLS
- TLS 1.3 required; TLS 1.2 permitted for legacy clients only.
- HTTP/2 and HTTP/3 support with ALPN.
- OCSP stapling and CT compliance.
- Certificates: support RSA and ECDSA; ACME or managed issuance. Describe options.
3.2 Caching and content rules
- Cache key normalization: host, path, and query-parameter handling model.
- Validators: ETag, Last-Modified, 304 behavior.
- Negative caching controls.
- Range request handling for large objects.
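The normalization model above can be made concrete with a small sketch. The parameter list to drop and the sort-then-rebuild policy are assumptions for illustration; each provider exposes equivalent controls through its own configuration model.

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

# Hypothetical list of tracking parameters excluded from the cache key.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "gclid", "fbclid"}

def cache_key(url: str) -> str:
    """Build a normalized cache key: lowercase host, path unchanged,
    tracking parameters dropped, remaining parameters sorted."""
    parts = urlsplit(url)
    params = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
              if k not in TRACKING_PARAMS]
    params.sort()
    query = urlencode(params)
    return f"{parts.netloc.lower()}{parts.path}" + (f"?{query}" if query else "")
```

With this policy, `https://Example.COM/img?b=2&a=1&utm_source=x` and `https://example.com/img?a=1&b=2` map to the same key, which is the parity property responses should document.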
3.3 Purge and revalidation
- APIs for URL, prefix, tag/surrogate-key purge; soft purge support.
- Propagation characteristics: p50, p90, worst case by region.
- Rate limits and quotas; idempotency features.
3.4 Security controls
- WAF capabilities, rule packs, custom rules.
- Bot management features; challenge types and actions.
- Rate limiting keys, windows, and actions.
- Origin authentication: mTLS, signed headers, allowlist management.
- Secrets handling and key custody.
3.5 Media and optimization (if applicable)
- HLS/DASH, CMAF, low latency modes.
- DRM license path handling.
- Image transforms, client hints, deterministic URLs.
3.6 Observability
- Log and metric exports: schemas, latency, delivery guarantees.
- Real user and synthetic integration options.
- Request id propagation; correlation guidance.
3.7 API and automation
- Configuration coverage via API and Terraform (or equivalent IaC tooling).
- Rate limits, auth models, audit trails.
- Change approval and versioning features.
3.8 Compliance and residency
- Regional edge and storage footprint.
- Controls to constrain routing by jurisdiction.
- Log storage locality and aggregation options.
- Subprocessor list and change notification.
## 4. Performance and reliability objectives
Targets by region: <TTFB, success ratio, streaming startup and stall>.
Availability commitments: <monthly %, scope definition>.
Measurement method: <RUM, synthetic, joint>.
Credit model: <how credits are calculated and applied>.
## 5. Capacity and architecture
Edge footprint and peering summary.
Mid-tier hierarchy and shield options.
Anycast and regional policies.
Admission controls to protect origin during failover.
## 6. Support and operations
Support tiers, SLAs, languages, hours, and regions.
Escalation paths and incident communications.
Maintenance windows and notification practices.
Runbook collaboration and joint drills.
## 7. Pilot and validation
Pilot scope: <regions, traffic %, duration>.
Success criteria aligned to Section 4.
Change safety: stickiness, rollback, exposure gates.
Joint responsibilities and data sharing.
## 8. Commercial terms
Pricing structure: data transfer, requests, features, fixed fees.
Regional pricing bands and volume-tier (staircase) details.
Commit options, step-down rights, true-up mechanisms.
Contract term, co-termination preferences.
Exit rights for repeated objective failures.
## 9. Submission format
Single PDF and a machine-readable attachment (CSV or JSON) with required fields below.
Include security and compliance questionnaires as appendices.
Required fields (CSV headers):
provider, region, http3, tls13, purge_url_p90_s, purge_tag_p90_s, logs_latency_minutes, api_rate_limit_rps, waf_custom_rules, bot_challenges, residency_controls, price_gb_usd, price_req_per_million_usd
## 10. Evaluation criteria and weights
Provide responses that map to these criteria. Weights guide scoring.
- Technical fit and parity potential: 30
- Performance and reliability evidence: 20
- Operations and support model: 15
- Observability and API coverage: 15
- Compliance and residency: 10
- Commercial terms and flexibility: 10
## 11. Declarations
Data handling and privacy statements.
Security certifications and audit reports.
Subprocessor list and change process.
Insurance and liability coverage.
## 12. Legal terms and reservations
Confidentiality, validity period of offer, right to negotiate, and rejection without award.
## Appendix A. Test plan matrix
| Test area | Method | Regions | Success metric | Evidence |
|-----------|--------|---------|----------------|---------|
| Purge URL | Measure p50/p90/worst | <EU, US, APAC> | p90 < <s> | API ids + timings |
| Logs export | End-to-end latency | <regions> | p90 < <minutes> | sample records |
| HTTP/3 | A/B sessions | <regions> | QoE equal or better | RUM deltas |
| Range requests | Large file resume | <regions> | No partial stalls | probe traces |
| WAF parity | Positive/negative cases | <regions> | Same outcomes | rule hit logs |
## Appendix B. Vendor response template (structured)
- Company overview: <text>
- Technical responses: map to sections 3.1 through 3.8 with headings.
- Reliability data: historical availability, incident summaries.
- Performance data: regional TTFB and throughput distributions.
- Observability: schemas and samples.
- API coverage: endpoints list and limits.
- Compliance: residency controls and attestations.
- Commercial: rate cards and options.
- Exceptions and assumptions: <text>
- Pilot plan proposal: <text>
Evaluation rubric
The following rubric aligns scoring with requirements and permits like-for-like comparison between providers. Scores are assigned per criterion on a 0 to 5 scale and multiplied by weights defined in the template. Evidence should be auditable and reproducible. Discretionary adjustments are documented with rationale to maintain parity across respondents.
Related chapters
Architecture choices are described in /multicdn/architecture-patterns/. Routing policy and precedence appear in /multicdn/traffic-steering/. Caching and purge behavior appear in /multicdn/cache-consistency/. TLS and security requirements appear in /multicdn/tls-certificates/ and /multicdn/security-parity/. Pilot and rollout practices appear in /multicdn/testing-canarying/. Monitoring and validation appear in /multicdn/monitoring-slos/. Compliance expectations appear in /multicdn/compliance/.
Further reading
Standards include RFC 9110 and RFC 9111 for HTTP semantics and caching, RFC 8446 for TLS 1.3, and RFC 8555 for ACME. Procurement references for service availability credits and true-up clauses provide structure for commercial terms.