Documentation Index

Fetch the complete documentation index at: https://docs.cora.ai/llms.txt

Use this file to discover all available pages before exploring further.

The Redshift integration makes warehouse-resident customer and product data available to Cora for analytics, enrichment, and downstream workflows. Rather than granting Cora direct access to your Redshift cluster, the recommended approach is to export data from Redshift and deliver it to Cora via SFTP. This keeps your warehouse isolated within your network while allowing controlled, auditable data exchange.
This integration is intentionally discussion-led. The exact setup depends on your Redshift schema, data volume, update cadence, and security requirements. Your Cora contact will work with your data and IT teams to align on the approach before any implementation begins.

Why SFTP instead of direct database access

Using SFTP as the handoff mechanism offers several advantages over providing Cora with direct database credentials or network access.
Strong security boundary
  • No direct database credentials, VPC peering, or inbound network access required
  • Read-only, file-based data sharing
Predictable governance
  • Explicit control over which datasets and fields are exported
  • Easier auditing and change management
Operational simplicity
  • Compatible with existing batch pipelines and data ops workflows
  • Avoids coupling Cora to your warehouse schema or query patterns
Clear ownership model
  • Your team owns data preparation and export
  • Cora owns ingestion and downstream processing
If your use case requires near-real-time access or ad-hoc querying of large datasets, a direct Redshift integration can be evaluated as a follow-up option.

What we will align on during the setup discussion

In a working session, your Cora contact will align with your data and IT owners on:
  • Which Redshift datasets are in scope (accounts, usage metrics, events, health scores, etc.)
  • File format and structure (CSV, Parquet, JSON, compression)
  • Export cadence (daily, intra-day, or on-demand)
  • Delivery guarantees and retry behavior
  • PII handling and data minimization expectations
  • Validation and reconciliation approach
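The outcome of this session is effectively a small data contract between your team and Cora. Purely as a hypothetical sketch (every value below is a placeholder to be settled in the working session, not a default), it might be captured like this:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ExportContract:
    """Hypothetical record of the parameters agreed during setup."""
    dataset: str            # e.g. "accounts", "usage_metrics", "events"
    file_format: str        # "csv", "parquet", or "json"
    compression: str        # e.g. "gzip", or "none"
    cadence: str            # "daily", "intra-day", or "on-demand"
    pii_columns: tuple = () # columns requiring masking, tokenization, or exclusion


# Placeholder values -- the real contract is agreed with your Cora contact.
contract = ExportContract(
    dataset="accounts",
    file_format="csv",
    compression="gzip",
    cadence="daily",
    pii_columns=("email",),
)
```

Keeping the agreed parameters in version control alongside your export code makes later changes auditable, which supports the governance goals above.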

Implementation path

Your team is responsible for generating and delivering files to an agreed-upon SFTP endpoint. The high-level steps are:
1. Define export views in Redshift

Create stable views or queries that represent the datasets to be shared. Avoid exposing raw or intermediate tables unless explicitly required.
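For illustration only — the view, schema, and column names below are invented, and the real scope is agreed with your Cora contact — an export view can be generated from code so the statement is easy to review and keep in version control:

```python
def build_export_view_sql(view_name: str, source_table: str, columns: list) -> str:
    """Render a CREATE VIEW statement exposing only the approved columns.

    Putting exports behind a view means the underlying table can change
    without breaking the file feed, and only agreed fields are ever shared.
    """
    column_list = ",\n    ".join(columns)
    return (
        f"CREATE OR REPLACE VIEW {view_name} AS\n"
        f"SELECT\n    {column_list}\n"
        f"FROM {source_table};"
    )


# Hypothetical names -- replace with the datasets agreed during setup.
sql = build_export_view_sql(
    view_name="export.accounts_v1",
    source_table="prod.accounts",
    columns=["account_id", "plan", "created_at"],
)
```

Versioning the view name (`_v1`) is one way to evolve the shared schema without silently changing what Cora receives.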
2. Generate export files

Produce files on a scheduled basis. Include deterministic identifiers so that Cora can process files idempotently.
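One way to provide deterministic identifiers, sketched here with hypothetical field names, is to derive a stable id for each row from its business keys. The same input always yields the same id, so a re-delivered file can be deduplicated on ingestion instead of creating duplicate rows:

```python
import hashlib
import json


def row_id(record: dict, key_fields: list) -> str:
    """Derive a deterministic identifier from a record's business keys.

    Sorting the keys before hashing makes the id independent of field order.
    """
    key = json.dumps({f: record[f] for f in key_fields}, sort_keys=True)
    return hashlib.sha256(key.encode("utf-8")).hexdigest()[:16]


# Hypothetical record -- field names depend on your agreed schema.
record = {"account_id": "A-123", "metric": "logins", "date": "2024-06-01", "value": 7}
stable_id = row_id(record, ["account_id", "metric", "date"])
```

Note that the id is computed from the key fields only, so a corrected `value` in a later delivery maps to the same row rather than a new one.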
3. Deliver files via SFTP

Upload files to the agreed directory structure. Follow the naming conventions agreed during setup — for example, incorporating the dataset name, date, and version.
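The exact convention is finalized during setup; as one hypothetical example of a name that incorporates dataset, date, and version:

```python
from datetime import date


def export_filename(dataset: str, export_date: date, version: int,
                    fmt: str = "csv", compression: str = "gz") -> str:
    """Build a predictable, lexicographically sortable filename.

    Embedding dataset, date, and version lets either side identify and
    re-request a specific delivery without opening the file.
    """
    return f"{dataset}_{export_date:%Y%m%d}_v{version}.{fmt}.{compression}"


name = export_filename("accounts", date(2024, 6, 1), 1)
# e.g. "accounts_20240601_v1.csv.gz"
```

Using a zero-padded `YYYYMMDD` date keeps directory listings in delivery order, which simplifies reconciliation later.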
4. Cora ingestion and validation

Cora ingests files, validates schema and row counts, and surfaces any errors. Failed or partial ingestions are reported back to your team for remediation.
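These checks run on Cora's side, but your pipeline can apply the same kind of validation before upload to catch problems early. A minimal sketch for a CSV export (column names are placeholders):

```python
import csv
import io


def validate_export(csv_text: str, expected_columns: list, expected_rows: int) -> list:
    """Check header columns and row count; return a list of problems found."""
    problems = []
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader, [])
    if header != expected_columns:
        problems.append(f"header mismatch: got {header}")
    actual_rows = sum(1 for _ in reader)
    if actual_rows != expected_rows:
        problems.append(f"row count mismatch: expected {expected_rows}, got {actual_rows}")
    return problems


sample = "account_id,plan\nA-1,pro\nA-2,free\n"
issues = validate_export(sample, ["account_id", "plan"], expected_rows=2)
# issues == [] when the file matches the agreed schema
```

The expected row count could come from a small manifest or control file delivered alongside the export, which is one common way to detect truncated uploads.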
5. Downstream usage in Cora

Successfully ingested data is mapped into Cora workflows, agents, and reporting surfaces.

Notes and guardrails

Discussion first — The exact schema, file format, and cadence are finalized collaboratively with your Cora contact. Do not begin implementation before this alignment is complete.
Data minimization — Only export fields that are required for operational use in Cora. Avoid including columns that are not needed downstream.
PII handling — Sensitive fields must be explicitly reviewed and approved before inclusion in any export. Work with your security and compliance teams to identify fields that require masking, tokenization, or exclusion.