Use the <connector_name>_pipeline.py file generated by the framework as inspiration.
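As a rough illustration, a generated pipeline file might have a shape like the sketch below. All names here (`extract`, `load`, `run_pipeline`, the example URL and table name) are assumptions for illustration, not the framework's actual API:

```python
# Hypothetical skeleton of a <connector_name>_pipeline.py, for orientation only.
# The real generated file's structure and names depend on the framework.

def extract(api_url="https://example.com/api/items"):
    """Pull raw records from the source system (stubbed with static data here)."""
    return [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]

def load(records, table="raw_items"):
    """Hand records to the configured destination (stubbed as a summary dict)."""
    return {"table": table, "rows": len(records)}

def run_pipeline():
    """Wire extract and load together, as a generated pipeline typically would."""
    return load(extract())

print(run_pipeline())
```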
Step 4: Test the Ingestion Function Locally
To verify your changes, run the function locally, using DuckDB as the local target.
This step allows you to test the function and inspect the output data format.
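The pattern of a local test run can be sketched as follows. To keep the example self-contained, it uses Python's built-in sqlite3 module as a stand-in for DuckDB, and `fetch_records` is a hypothetical ingestion function; the real pipeline would write to a DuckDB target instead:

```python
# Sketch of testing an ingestion function against a local database.
# sqlite3 stands in for DuckDB here so the example runs anywhere;
# fetch_records is a hypothetical ingestion function, not part of the framework.
import sqlite3

def fetch_records():
    # In a real connector this would call the source API.
    return [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]

def run_locally(records, db_path=":memory:"):
    """Load records into a local database and return (conn, row_count) for inspection."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS raw_records (id INTEGER, name TEXT)")
    conn.executemany(
        "INSERT INTO raw_records (id, name) VALUES (:id, :name)", records
    )
    conn.commit()
    count = conn.execute("SELECT COUNT(*) FROM raw_records").fetchone()[0]
    return conn, count

conn, count = run_locally(fetch_records())
print(count)
```

Inspecting the local table afterwards (e.g. with a `SELECT *`) is how you confirm the output data format before deploying.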
Step 5: Generate the Source Schema
After running the pipeline locally (see above), generate a source schema definition.
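Conceptually, a source schema definition maps each table's columns to types. The sketch below infers such a definition from sample records; `infer_schema` and the type mapping are illustrative assumptions, not the framework's generator:

```python
# Sketch of deriving a source schema definition from sample records.
# The framework derives this from the local pipeline run; infer_schema is a
# hypothetical illustration of what such a definition contains.
import json

PY_TO_SQL = {int: "BIGINT", float: "DOUBLE", str: "VARCHAR", bool: "BOOLEAN"}

def infer_schema(table_name, records):
    columns = {}
    for record in records:
        for key, value in record.items():
            # First occurrence of a column wins; unknown types default to VARCHAR.
            columns.setdefault(key, PY_TO_SQL.get(type(value), "VARCHAR"))
    return {"table": table_name, "columns": columns}

schema = infer_schema("raw_records", [{"id": 1, "name": "alpha"}])
print(json.dumps(schema, indent=2))
```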
Step 6: Create Transformation Models
Based on the schema files generated in Step 5, boringdata can automatically generate a corresponding SQL transformation model for each table, targeting Amazon Athena.
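A generated transformation model is, at its core, a SQL SELECT that casts and renames raw columns. The sketch below renders one from a schema definition; the `raw` source prefix and the casting style are assumptions, and the framework's actual generated models may differ:

```python
# Sketch of generating a staging SQL model from a schema definition.
# The "raw" database prefix and CAST-per-column style are illustrative
# assumptions about what a generated Athena model might look like.

def render_staging_model(schema):
    """Render an Athena-style staging SELECT that casts each column."""
    select_lines = ",\n".join(
        f"    CAST({name} AS {sql_type}) AS {name}"
        for name, sql_type in schema["columns"].items()
    )
    return f"SELECT\n{select_lines}\nFROM raw.{schema['table']}"

model = render_staging_model(
    {"table": "raw_records", "columns": {"id": "BIGINT", "name": "VARCHAR"}}
)
print(model)
```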
Step 7: (Optional) Add Workflow Automation
To coordinate the ingestion and transformation steps, add workflow automation using AWS Step Functions.
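An AWS Step Functions workflow is defined in Amazon States Language (JSON). A minimal two-state machine chaining ingestion and transformation might look like the sketch below; the ARNs are placeholders, not real resources, and your state machine may need retries and error handling on top:

```python
# Sketch of an Amazon States Language definition chaining ingestion and
# transformation. Resource ARNs are placeholders (REGION/ACCOUNT/function
# names are assumptions for illustration).
import json

state_machine = {
    "Comment": "Run ingestion, then transformation",
    "StartAt": "Ingest",
    "States": {
        "Ingest": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:ingest",
            "Next": "Transform",
        },
        "Transform": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:transform",
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))
```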
Step 8: Deploy the Infrastructure
Finally, deploy the project from the root directory.