ETL takes time, and there's a lot to maintain. Sometimes a pipeline breaks because you didn't expect a string to contain emojis. You might decide a transformation needs to change, which means refreshing all your data. So what can you do to avoid this?
At Zervant, we currently use Databricks for our ETL processes, and it works quite well. However, it has been difficult to set up scripts that run both locally and on the Databricks cloud. Specifically, Databricks connects to AWS S3 through its own proprietary libraries based on Hadoop 2.7, and that version does not support access via AWS profiles. Internally, we use SSO to create temporary credentials for an AWS profile that then assumes a role, so reading the ACCESS_ID and ACCESS_SECRET from the .credentials file is something we want to avoid. To accomplish this, we need to set the following Hadoop configuration on the Spark context:

fs.s3a.aws.credentials.provider = com.amazonaws.auth.profile.ProfileCredentialsProvider

This is done by running this line of code:

sc._jsc.hadoopConfiguration().set("fs.s3a.aws.credentials.provider", "com.amazonaws.auth.profile.ProfileCredentialsProvider")

Note! You need to set your environment var...
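Put together, a script that runs unchanged on a laptop and on Databricks can look roughly like this. This is a minimal sketch, assuming the AWS_PROFILE environment variable is exported in your local shell and the hadoop-aws and AWS SDK jars are on the classpath; the app name, bucket, and path are placeholders, not our actual job.

```python
from pyspark.sql import SparkSession

# Minimal sketch: assumes AWS_PROFILE is exported locally and the
# hadoop-aws / AWS SDK jars are available on the classpath.
spark = SparkSession.builder.appName("etl-job").getOrCreate()
sc = spark.sparkContext

# Resolve S3 credentials from the local AWS profile (SSO-issued temporary
# credentials) instead of a hard-coded ACCESS_ID / ACCESS_SECRET pair.
sc._jsc.hadoopConfiguration().set(
    "fs.s3a.aws.credentials.provider",
    "com.amazonaws.auth.profile.ProfileCredentialsProvider",
)

# s3a:// reads now go through the profile's credentials.
# The bucket and key below are placeholders.
df = spark.read.parquet("s3a://example-bucket/raw/invoices/")
```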
The ApplyMapping class is a type conversion and field renaming function for your data. To apply the map, you need two things:

- A dataframe
- The mapping list
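As a minimal sketch of what such a mapping step can look like on a plain Spark DataFrame, here is a helper that takes exactly those two things, using Glue-style (source_name, source_type, target_name, target_type) tuples. The apply_mapping name and the invoice fields are hypothetical, not the article's actual class.

```python
from pyspark.sql import DataFrame
from pyspark.sql.functions import col

def apply_mapping(df: DataFrame, mappings) -> DataFrame:
    # Keep only the mapped fields, casting each one to its target type
    # and renaming it to its target name.
    return df.select(
        *[
            col(src).cast(dst_type).alias(dst)
            for src, _src_type, dst, dst_type in mappings
        ]
    )

# Hypothetical mapping list: rename `id` to `invoice_id` and cast
# `amount` from string to double.
mappings = [
    ("id", "string", "invoice_id", "string"),
    ("amount", "string", "amount", "double"),
]
mapped_df = apply_mapping(df, mappings)
```

Fields not listed in the mapping are dropped, which mirrors how a mapping list doubles as a column whitelist.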