A. Modify the Kinesis Data Firehose delivery stream to deliver the data to Amazon S3 with a high buffer size, and load the data into Amazon Redshift by using the COPY command (see the buffering sketch after this list).
B. Stream real-time data into Redshift temporary tables before loading the data into permanent tables.
C. For bulk inserts, split the input files on Amazon S3 into multiple files that match the number of slices in the Amazon Redshift cluster, then use the COPY command to load the data into Amazon Redshift (see the COPY sketch after this list).
D. For bulk inserts, use the parallel parameter in the COPY command to enable multi-threading.
E. Optimize analytics SQL queries to use sort keys.
F. Avoid using temporary tables in analytics SQL queries.
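Option A hinges on Firehose buffering: a larger buffer produces fewer, bigger S3 objects, which suits Redshift's batch-oriented COPY loading. Below is a minimal boto3 sketch of that change, assuming the delivery stream uses an extended S3 destination; the stream name is an illustrative placeholder.

```python
import boto3

firehose = boto3.client("firehose")

STREAM = "example-delivery-stream"  # illustrative placeholder name

# update_destination requires the current version and destination IDs
# (optimistic locking), so look them up first.
desc = firehose.describe_delivery_stream(DeliveryStreamName=STREAM)
stream_desc = desc["DeliveryStreamDescription"]

firehose.update_destination(
    DeliveryStreamName=STREAM,
    CurrentDeliveryStreamVersionId=stream_desc["VersionId"],
    DestinationId=stream_desc["Destinations"][0]["DestinationId"],
    ExtendedS3DestinationUpdate={
        # Larger buffers mean fewer, larger S3 objects and fewer COPY loads.
        "BufferingHints": {
            "SizeInMBs": 128,         # maximum buffer size for an S3 destination
            "IntervalInSeconds": 900,
        }
    },
)
```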
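Option C's load step is a plain COPY against the pre-split files. A minimal sketch, assuming psycopg2, a manifest file that lists the split parts, and placeholder cluster, database, table, bucket, and IAM role names:

```python
import os

import psycopg2

COPY_SQL = """
COPY analytics.events
FROM 's3://example-bucket/input/manifest.json'
IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftCopyRole'
MANIFEST
FORMAT AS CSV;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="loader",
    password=os.environ["REDSHIFT_PASSWORD"],
)
try:
    with conn, conn.cursor() as cur:
        # Given multiple input files, COPY spreads the load work across the
        # cluster's slices automatically; no extra parameter is needed.
        cur.execute(COPY_SQL)
finally:
    conn.close()
```

Splitting the input so the file count is a multiple of the slice count keeps every slice busy for the duration of the load.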