ETL takes time, and it is a lot to maintain. Sometimes it breaks because you didn't expect a string to contain emojis. Sometimes you decide the transformation needs to change, which means you need to refresh all your data. So what can you do to avoid this?
At Zervant, we currently use Databricks for our ETL processes, and it's quite great. However, there's been some difficulty in setting up scripts that work both locally and on the Databricks cloud. Specifically, Databricks uses its own proprietary libraries to connect to AWS S3, based on AWS Hadoop 2.7, and that version does not support authentication via AWS profiles. Internally, we use SSO to create temporary credentials for an AWS profile that then assumes a role, so reading the ACCESS_ID and ACCESS_SECRET from the .credentials file is something we don't want to do. To accomplish this, we need to set the following Hadoop configuration on the Spark context:

```
fs.s3a.aws.credentials.provider: com.amazonaws.auth.profile.ProfileCredentialsProvider
```

This is done by running this line of code:

```python
sc._jsc.hadoopConfiguration().set(
    "fs.s3a.aws.credentials.provider",
    "com.amazonaws.auth.profile.ProfileCredentialsProvider",
)
```

Note! You need to set your environment var...
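For context, here's a minimal sketch of how this looks in a local PySpark script, assuming hadoop-aws and a matching AWS Java SDK are already on the classpath. The profile name and bucket path are made up for illustration; AWS_PROFILE is the variable the AWS Java SDK consults when ProfileCredentialsProvider picks a profile.

```python
import os

from pyspark.sql import SparkSession

# Assumed profile name; must match a profile in your ~/.aws/config.
# ProfileCredentialsProvider picks up the profile named in AWS_PROFILE.
os.environ["AWS_PROFILE"] = "my-sso-profile"

spark = SparkSession.builder.appName("local-s3a-read").getOrCreate()
sc = spark.sparkContext

# Route s3a credential lookup through the AWS profile machinery
# instead of the default access key / secret key chain.
sc._jsc.hadoopConfiguration().set(
    "fs.s3a.aws.credentials.provider",
    "com.amazonaws.auth.profile.ProfileCredentialsProvider",
)

# Hypothetical bucket and prefix, for illustration only.
df = spark.read.parquet("s3a://my-bucket/raw/events/")
df.show(5)
```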
You want to insert data into a table, but if a corresponding row already exists (by some rule, e.g. a unique key) you want to update that row instead of adding a new one, keeping the dataset's uniqueness requirements intact. That's an "UPDATE AND INSERT" operation, or UPSERT. Some SQL dialects support it natively: PostgreSQL has INSERT ... ON CONFLICT, and MySQL has INSERT ... ON DUPLICATE KEY UPDATE. How do you do an UPSERT on Snowflake? Here's how:

Snowflake UPSERT, i.e. the MERGE operation

Snowflake's UPSERT is called MERGE, and it works just as conveniently. It just has a different name. Here's the simple usage:

```sql
MERGE INTO workspace.destination_table d
USING workspace.source_table s
  ON d.id = s.id AND d.val1 = s.val1
WHEN MATCHED THEN UPDATE SET
  d.val2 = s.val2,
  d.val3 = s.val3
WHEN NOT MATCHED THEN INSERT
  (id, val1, val2, val3)
VALUES
  (s.id, s.val1, s.val2, s.val3);
```

Here the destination_table and source_table are of similar form,...
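As a quick aside, if you trigger the statement from a Python job rather than a worksheet, a minimal sketch with the snowflake-connector-python package might look like the following. The account, user, warehouse, and the SNOWFLAKE_PASSWORD environment variable are all assumptions, not values from the post.

```python
import os

import snowflake.connector  # pip install snowflake-connector-python

MERGE_SQL = """
MERGE INTO workspace.destination_table d
USING workspace.source_table s
  ON d.id = s.id AND d.val1 = s.val1
WHEN MATCHED THEN UPDATE SET d.val2 = s.val2, d.val3 = s.val3
WHEN NOT MATCHED THEN INSERT (id, val1, val2, val3)
VALUES (s.id, s.val1, s.val2, s.val3)
"""

# All connection parameters are placeholders; the password is read from
# an assumed environment variable rather than hard-coded.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="etl_wh",
    database="workspace",
)

try:
    with conn.cursor() as cur:
        cur.execute(MERGE_SQL)
        # MERGE reports the number of rows inserted and updated.
        print(cur.fetchone())
finally:
    conn.close()
```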