# Key Concepts

This section explains the core concepts and architecture of this template.

## Code Structure

The template's code is organized into three main components:

```
📁
├── 📁 pipelines/             # Data pipelines:
│   ├── 📁 ingest/                      # Data ingestion layer
│   ├── 📁 transform/                   # Data transformation layer
│   └── 📁 orchestrate/                 # Workflow orchestration layer
│
├── 📁 base/                  # Cloud infrastructure
│   ├── 📁 aws/                         # Cloud provider resources (VPC, IAM, etc.)
│   └── 📁 snowflake/                   # Data warehouse resources
│
└── 📁 live/                  # Environment-specific deployment configuration
```

Each component is documented separately here:

{% content-ref url="../project-structure/pipelines" %}
[pipelines](https://docs.boringdata.io/template-aws-snowflake/project-structure/pipelines)
{% endcontent-ref %}

{% content-ref url="../project-structure/aws" %}
[aws](https://docs.boringdata.io/template-aws-snowflake/project-structure/aws)
{% endcontent-ref %}

{% content-ref url="../project-structure/snowflake" %}
[snowflake](https://docs.boringdata.io/template-aws-snowflake/project-structure/snowflake)
{% endcontent-ref %}

{% content-ref url="../project-structure/live" %}
[live](https://docs.boringdata.io/template-aws-snowflake/project-structure/live)
{% endcontent-ref %}

## Data Flow

1. Serverless functions ingest data into S3
2. Snowpipes copy the data from S3 into landing tables in Snowflake
3. SQL transformations in [dbt](https://docs.getdbt.com/) build staging and mart tables on top of the landing tables

## Data Pipeline Architecture

Our data platform follows a layered architecture:

### 1. Data Ingestion Layer

For each source, the ingestion layer is structured as follows:

```
📁 pipelines/
├── 📁 ingest/
│   ├── 📁 <source>-ingestion/      # Core ingestion logic
│   │
│   └── <source>_source_schema.yml   # Table schema definitions (YAML)
│
└── <source>_*.tf                   # Infrastructure definition (serverless functions, containers, etc.)
```

Each source has:

* A folder `pipelines/ingest/<source>-ingestion/` containing the core ingestion logic, packaged in a container
* Infrastructure-as-Code files in `pipelines/*.tf` for deploying this ingestion container (as serverless functions ([AWS Lambda](https://aws.amazon.com/lambda/)) or container tasks ([Amazon ECS](https://aws.amazon.com/ecs/)))
* A YAML file `pipelines/<source>_source_schema.yml` defining the schemas of the data warehouse tables
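
To illustrate the idea behind schema-as-YAML, here is a hedged sketch of turning a parsed schema definition into Snowflake DDL for a landing table. The dictionary shape (`table`, `columns`, per-column `name`/`type`) is a hypothetical simplification of what a `<source>_source_schema.yml` might contain after parsing; the template's actual YAML keys may differ:

```python
# Hypothetical, simplified shape of a parsed <source>_source_schema.yml.
schema = {
    "table": "players",
    "columns": [
        {"name": "player_id", "type": "VARCHAR"},
        {"name": "rating", "type": "NUMBER"},
        {"name": "loaded_at", "type": "TIMESTAMP_NTZ"},
    ],
}

def landing_table_ddl(schema: dict, database: str, schema_name: str) -> str:
    """Render a CREATE TABLE statement for a Snowflake landing table."""
    cols = ",\n  ".join(f'{c["name"]} {c["type"]}' for c in schema["columns"])
    return (
        f"CREATE TABLE IF NOT EXISTS {database}.{schema_name}.{schema['table']} (\n"
        f"  {cols}\n)"
    )

print(landing_table_ddl(schema, "RAW", "CHESS"))
```

Keeping the schema in one YAML file per source means adding a column is a one-line change, with the DDL regenerated rather than hand-edited.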

{% hint style="info" %}
Schema management is handled through YAML files, making it easy to define and evolve table structures. More info in [#snowflake-schema-management](https://docs.boringdata.io/template-aws-snowflake/help/faq#snowflake-schema-management "mention")
{% endhint %}

The template comes with an example data ingestion pipeline deployed as a serverless function using [dlt](https://dlthub.com/docs/intro); more details here:

{% content-ref url="../project-structure/pipelines/chess-ingestion" %}
[chess-ingestion](https://docs.boringdata.io/template-aws-snowflake/project-structure/pipelines/chess-ingestion)
{% endcontent-ref %}

### 2. Data Transformation Layer

The transformation layer, located in the `pipelines/transform` folder, is a SQL-based project that turns raw data into analytics-ready tables using [dbt](https://docs.getdbt.com/) as the transformation framework:

```
📁 pipelines/
├── 📁 transform/                   # SQL transformation project
│   ├── 📁 models/
│   │   ├── 📁 staging/            # Views on raw landing tables
│   │   └── 📁 marts/              # Business-level transformations
│   │
│   ├── dbt_project.yml
│   └── Dockerfile                  # For container deployment
│
└── ecs_task_dbt.tf                 # Infrastructure for transformation tasks
```

This transformation project runs on container infrastructure ([Amazon ECS](https://aws.amazon.com/ecs/) Fargate) and connects directly to [Snowflake](https://www.snowflake.com/en/).

More details on how this transformation project is structured here:

{% content-ref url="../project-structure/pipelines/transform" %}
[transform](https://docs.boringdata.io/template-aws-snowflake/project-structure/pipelines/transform)
{% endcontent-ref %}

### 3. Workflow Orchestration Layer

The orchestration layer coordinates the execution of the ingestion and transformation layers using workflow automation.

The template includes an example orchestration built with [AWS Step Functions](https://aws.amazon.com/step-functions/):

```
📁 pipelines/
├── 📁 orchestrate/
│   └── <source>_step_function.json  # Workflow definition
│
└── <source>_step_function.tf        # Infrastructure definition for the Step Functions workflow
```
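
To give a feel for what a `<source>_step_function.json` workflow definition contains, here is a hedged sketch of a minimal two-state machine in Amazon States Language: invoke the ingestion Lambda, then run the dbt ECS task. The resource names and parameters are placeholders, and the template's actual definition will be more elaborate (error handling, retries, task parameters):

```python
import json

# Hypothetical two-state workflow; resource identifiers are placeholders.
state_machine = {
    "StartAt": "Ingest",
    "States": {
        "Ingest": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {"FunctionName": "<source>-ingestion"},
            "Next": "Transform",
        },
        "Transform": {
            "Type": "Task",
            "Resource": "arn:aws:states:::ecs:runTask.sync",
            "Parameters": {"TaskDefinition": "dbt-transform"},
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))
```

The `.sync` suffix on the ECS integration makes the state machine wait for the container task to finish before the workflow completes.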

<div align="center"><img src="https://2081077372-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FMV8jwUDrYLitfvOBJqeO%2Fuploads%2Fgit-blob-bd6a5c64e90abee370bc98508fd4e6c70fda1f6b%2Fgrafik.png?alt=media" alt="Chess Pipeline Workflow in AWS Step Function" width="375"></div>

## Deployment

This template is ready to be deployed.

The stack deployment is structured in two steps:

* First, the infrastructure modules (`base/` and `pipelines/`) are deployed using [Terragrunt](https://terragrunt.gruntwork.io/)
* Then, the containers for the ingestion and transformation layers are built and pushed to the container registry ([Amazon ECR](https://aws.amazon.com/ecr/))
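
The two steps above can be sketched as an ordered command plan. This is illustrative only: the real commands live in the template's Makefiles and CI workflows, and the working-directory layout and registry URL below are assumptions:

```python
# Hypothetical sketch of the two-step deployment order.
def deployment_plan(env: str, containers: list[str]) -> list[str]:
    # Step 1: apply all infrastructure modules for the environment.
    steps = [f"terragrunt run-all apply --terragrunt-working-dir live/{env}"]
    # Step 2: build and push each pipeline container image.
    for name in containers:
        steps.append(f"docker build -t {name} pipelines/{name}")
        steps.append(f"docker push <ecr-registry>/{name}:latest")
    return steps

for step in deployment_plan("dev", ["ingest/chess-ingestion", "transform"]):
    print(step)
```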

<figure><img src="https://2081077372-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FMV8jwUDrYLitfvOBJqeO%2Fuploads%2Fgit-blob-13978472720de0e10361c210639f5558878e074b%2Fgrafik.png?alt=media" alt="" width="375"><figcaption></figcaption></figure>

If you want to get started quickly and deploy the template from your machine, follow this guide:

{% content-ref url="get-started" %}
[get-started](https://docs.boringdata.io/template-aws-snowflake/introduction/get-started)
{% endcontent-ref %}

To get started deploying from [GitHub Actions](https://github.com/features/actions) CI/CD, head there:

{% content-ref url="../guides/production-deployment" %}
[production-deployment](https://docs.boringdata.io/template-aws-snowflake/guides/production-deployment)
{% endcontent-ref %}

## Makefile

The template ships with Makefiles throughout the codebase that provide utility commands.

Here are some examples:

* `make deploy` in the root folder deploys the template from your machine
* `make build` in a folder with a Dockerfile builds the container
* `make local-run` runs the code locally
* etc.

Wherever you see a Makefile, run `make` to list the available targets.

{% content-ref url="get-started" %}
[get-started](https://docs.boringdata.io/template-aws-snowflake/introduction/get-started)
{% endcontent-ref %}
