The target-duckdb loader sends data into DuckDB after it has been pulled from a source using an extractor.
Getting Started
Prerequisites
If you haven't already, follow the initial steps of the Getting Started guide:
Installation and configuration
- Add the target-duckdb loader to your project using `meltano add`:

  ```shell
  meltano add loader target-duckdb
  ```

- Configure the target-duckdb settings using `meltano config`:

  ```shell
  meltano config target-duckdb set --interactive
  ```

Next steps
Follow the remaining steps of the Getting Started guide:
If you run into any issues, learn how to get help.
Capabilities
This plugin currently has no capabilities defined. If you know the capabilities required by this plugin, please contribute!

Settings
The target-duckdb settings that are known to Meltano are documented below. To quickly find the setting you're looking for, click on any setting name from the list:

- filepath
- batch_size_rows
- flush_all_streams
- default_target_schema
- schema_mapping
- add_metadata_columns
- hard_delete
- data_flattening_max_level
- primary_key_required
- validate_records
- temp_dir
You can override these settings or specify additional ones in your `meltano.yml` by adding the `settings` key.
Please consider adding any settings you have defined locally to this definition on MeltanoHub by making a pull request to the YAML file that defines the settings for this plugin.
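For example, a minimal plugin definition in `meltano.yml` overriding a couple of these settings could look like the following sketch (the file path and schema name are illustrative, not defaults):

```yaml
# meltano.yml (excerpt)
plugins:
  loaders:
    - name: target-duckdb
      config:
        filepath: output/warehouse.duckdb   # illustrative local path
        default_target_schema: main         # schema the tables land in
```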
File Path (filepath)
- Environment variable: `TARGET_DUCKDB_FILEPATH`

Path to the local DuckDB file.
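A sketch of setting it in the plugin's `config` in `meltano.yml` (the path shown is a hypothetical example):

```yaml
# under the target-duckdb plugin definition in meltano.yml
config:
  filepath: .meltano/warehouse.duckdb  # hypothetical path; DuckDB creates the file if it doesn't exist
```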
Batch Size Rows (batch_size_rows)
- Environment variable: `TARGET_DUCKDB_BATCH_SIZE_ROWS`
- Default value: `100000`

Maximum number of rows in each batch. At the end of each batch, the rows in the batch are loaded into DuckDB.
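For instance, lowering the batch size in `meltano.yml` (the value is illustrative; the trade-off follows from the description above):

```yaml
# under the target-duckdb plugin definition in meltano.yml
config:
  batch_size_rows: 50000  # smaller batches load more often; larger batches buffer more rows before loading
```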
Flush All Streams (flush_all_streams)
- Environment variable: `TARGET_DUCKDB_FLUSH_ALL_STREAMS`
- Default value: `false`

Flush and load every stream into DuckDB when one batch is full. Warning: this may cause the COPY command to use files with a low number of records.
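Enabling it is a one-line config change; a sketch:

```yaml
# under the target-duckdb plugin definition in meltano.yml
config:
  flush_all_streams: true  # flush every stream whenever any one stream's batch fills up
```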
Default Target Schema (default_target_schema)
- Environment variable: `TARGET_DUCKDB_DEFAULT_TARGET_SCHEMA`

Name of the schema where the tables will be created. If schema_mapping is not defined, every stream sent by the tap is loaded into this schema.
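For example (the schema name is illustrative):

```yaml
# under the target-duckdb plugin definition in meltano.yml
config:
  default_target_schema: analytics  # all streams land here unless schema_mapping overrides it
```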
Schema Mapping (schema_mapping)
- Environment variable: `TARGET_DUCKDB_SCHEMA_MAPPING`

Useful if you want to load multiple streams from one tap into multiple DuckDB schemas. If the tap sends the stream_id in `<schema_name>-<table_name>` format, the mapping can define the target schema for each incoming stream.
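A sketch of what the mapping could look like, assuming the shape used by pipelinewise-style targets (the schema names are hypothetical):

```yaml
# under the target-duckdb plugin definition in meltano.yml
config:
  schema_mapping:
    public:                      # <schema_name> part of the incoming stream_id
      target_schema: raw_public  # DuckDB schema to load those streams into
    sales:
      target_schema: raw_sales
```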
Add Metadata Columns (add_metadata_columns)
- Environment variable: `TARGET_DUCKDB_ADD_METADATA_COLUMNS`
- Default value: `false`

Metadata columns add extra row-level information about data ingestion (e.g. when the row was read from the source, or when it was inserted or deleted in the destination). Metadata columns are created automatically by adding extra columns with an SDC prefix to the tables. The column names follow the Stitch naming conventions documented at https://www.stitchdata.com/docs/data-structure/integration-schemas#sdc-columns. Enabling metadata columns will flag deleted rows by setting the _SDC_DELETED_AT metadata column. Without the add_metadata_columns option, rows deleted by Singer taps will not be recognizable in DuckDB.
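Enabling it in `meltano.yml`:

```yaml
# under the target-duckdb plugin definition in meltano.yml
config:
  add_metadata_columns: true  # adds SDC-prefixed columns such as _SDC_DELETED_AT to every table
```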
Hard Delete (hard_delete)
- Environment variable: `TARGET_DUCKDB_HARD_DELETE`
- Default value: `false`

When the hard_delete option is true, DELETE SQL commands are run in DuckDB to remove rows from tables. This is achieved by continuously checking the _SDC_DELETED_AT metadata column sent by the Singer tap. Because deleting rows requires metadata columns, the hard_delete option automatically enables the add_metadata_columns option as well.
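A sketch of enabling it:

```yaml
# under the target-duckdb plugin definition in meltano.yml
config:
  hard_delete: true  # implies add_metadata_columns: true; rows flagged via _SDC_DELETED_AT are DELETEd
```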
Data Flattening Max Level (data_flattening_max_level)
- Environment variable: `TARGET_DUCKDB_DATA_FLATTENING_MAX_LEVEL`
- Default value: `0`

Object-type RECORD items from taps can be transformed into flattened columns, which are created automatically up to this nesting depth. When the value is 0 (the default), flattening is turned off.
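For instance, allowing two levels of flattening (the column-naming comment assumes the double-underscore convention common to pipelinewise-style targets; treat it as an assumption, not something this page confirms):

```yaml
# under the target-duckdb plugin definition in meltano.yml
config:
  data_flattening_max_level: 2  # flatten nested objects up to two levels deep
  # e.g. {"address": {"city": "Oslo"}} would become a column like address__city (assumed naming)
```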
Primary Key Required (primary_key_required)
- Environment variable: `TARGET_DUCKDB_PRIMARY_KEY_REQUIRED`
- Default value: `true`

Log-based and incremental replication on tables with no primary key cause duplicates when merging UPDATE events. When set to true, loading stops if no primary key is defined.
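To load such streams anyway, accepting the duplicate risk described above:

```yaml
# under the target-duckdb plugin definition in meltano.yml
config:
  primary_key_required: false  # allow streams without a primary key (may produce duplicates)
```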
Validate Records (validate_records)
- Environment variable: `TARGET_DUCKDB_VALIDATE_RECORDS`
- Default value: `false`

Validate every single RECORD message against the corresponding JSON schema. This option is disabled by default, in which case invalid RECORD messages fail only at load time in DuckDB. Enabling this option detects invalid records earlier but could cause performance degradation.
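Enabling validation:

```yaml
# under the target-duckdb plugin definition in meltano.yml
config:
  validate_records: true  # validate each RECORD against its stream's JSON schema before loading
```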
Temporary Directory (temp_dir)
- Environment variable: `TARGET_DUCKDB_TEMP_DIR`

Directory for the temporary CSV files that hold RECORD messages.
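For example (the path is hypothetical):

```yaml
# under the target-duckdb plugin definition in meltano.yml
config:
  temp_dir: /tmp/target-duckdb  # hypothetical directory for the intermediate CSV files
```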
Something missing?
This page is generated from a YAML file that you can contribute changes to. Edit it on GitHub!

Looking for help?
If you're having trouble getting target-duckdb to work, ask for help in the #plugins-general channel on the Meltano Slack.
