by Bhuvanesh

In my previous post, I explained how to unload all the tables in a RedShift database to an S3 bucket. But there was a limitation: it exported every table, and you couldn't specify a list of tables or all the tables in a specific schema. That's because I couldn't pass comma-separated values to the RedShift stored procedure. After spending a few days on it, I found a solution.

## What we’ll achieve here?

• Unload multiple tables to S3.
• Unload all the tables in a specific schema.
• Unload specific tables from any schema.
• Unload all the tables in all the schemas.

## Stored Procedure:

You can refer to my previous post to understand how it works and the meaning of the variables I used.

Arguments Used:

• s3_path - Location to export the data.
• schema_name - Export the tables in this schema.
• table_list - List of tables to export.

To understand all other parameters read my previous post.

Hardcoded Items:

In the stored procedure, I have hardcoded the following parameters.

• IAM ROLE - arn:aws:iam::123123123123:role/myredshiftrole
• Delimiter - |

You can customize:

Also, the following items are hardcoded in the UNLOAD query. You can pass them in as variables or keep them hardcoded, as per your convenience.

• MAXFILESIZE - 100 MB
• PARALLEL
• GZIP

Create a table to maintain the unload history:
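A minimal sketch of the history table; the exact column names and sizes are my assumption, chosen to hold what the procedure logs (the UNLOAD query it ran and the matching COPY query mentioned at the end of this post):

```sql
-- Sketch only: column names/sizes are assumptions, not the post's exact DDL.
CREATE TABLE IF NOT EXISTS unload_history
(
    pid          INT IDENTITY(1, 1),       -- surrogate key for each unload
    u_timestamp  TIMESTAMP DEFAULT SYSDATE, -- when the unload ran
    schema_name  VARCHAR(128),
    table_name   VARCHAR(128),
    unload_query VARCHAR(65000),           -- the UNLOAD statement executed
    copy_query   VARCHAR(65000)            -- ready-made COPY to reload elsewhere
);
```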

Stored Procedure:

I have fewer than 2048 tables; if you have more than that, just add a few more SELECT ... UNION ALL rows in the number-series portion below.
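The full procedure is in the original post; the condensed sketch below only shows the shape of the logic. The procedure name `unload_tables` and the short number series are my assumptions — the real version extends the UNION ALL series far enough to cover up to 2048 comma-separated values:

```sql
-- Sketch, not the post's exact code. Hardcoded: IAM role, '|' delimiter,
-- MAXFILESIZE 100 MB, PARALLEL ON, GZIP (as listed above).
CREATE OR REPLACE PROCEDURE unload_tables
(
    s3_path     VARCHAR(1000),  -- S3 location to export the data
    schema_name VARCHAR(1000),  -- comma-separated schemas, or 'all'
    table_list  VARCHAR(60000)  -- comma-separated tables, or 'all'
)
AS $$
DECLARE
    rec        RECORD;
    unload_sql VARCHAR(65000);
BEGIN
    FOR rec IN
        -- The number series + split_part() turns the comma-separated input
        -- into rows; extend the UNION ALL chain to match your table count.
        WITH numbers(n) AS (
            SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
            -- ... add more unions here if you have more tables/schemas
        )
        SELECT t.table_schema, t.table_name
        FROM information_schema.tables t
        WHERE t.table_type = 'BASE TABLE'
          AND (schema_name = 'all' OR t.table_schema IN
                (SELECT TRIM(split_part(schema_name, ',', n)) FROM numbers))
          AND (table_list = 'all' OR t.table_name IN
                (SELECT TRIM(split_part(table_list, ',', n)) FROM numbers))
    LOOP
        unload_sql := 'UNLOAD (''SELECT * FROM '
            || rec.table_schema || '.' || rec.table_name
            || ''') TO ''' || s3_path || rec.table_schema || '/'
            || rec.table_name || '/'''
            || ' IAM_ROLE ''arn:aws:iam::123123123123:role/myredshiftrole'''
            || ' DELIMITER ''|'' MAXFILESIZE 100 MB PARALLEL ON GZIP';
        EXECUTE unload_sql;
        -- The real procedure also inserts the UNLOAD and matching COPY
        -- statements into unload_history here.
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```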

## Example Commands:

Export all the tables in the schema sc1:
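Assuming a procedure signature of `unload_tables(s3_path, schema_name, table_list)` (the name and the bucket path are my placeholders):

```sql
CALL unload_tables('s3://my-bucket/unload/', 'sc1', 'all');
```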

Export all the tables in the schema sc3,public:
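Multiple schemas go in as a single comma-separated string (procedure name and bucket path are my placeholders):

```sql
CALL unload_tables('s3://my-bucket/unload/', 'sc3,public', 'all');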

Export the tables tbl1, tbl2 in the schema sc1:
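Specific tables within one schema, again as a comma-separated string (names and path are my placeholders):

```sql
CALL unload_tables('s3://my-bucket/unload/', 'sc1', 'tbl1,tbl2');
```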

Export tbl4 and tbl5 without specifying any schema name (but if multiple schemas have tables with the same name, all of them will be exported):
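Passing 'all' for the schema makes the table filter match across every schema (procedure name and path are my placeholders):

```sql
CALL unload_tables('s3://my-bucket/unload/', 'all', 'tbl4,tbl5');
```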

Export all the tables in all the schemas:
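With 'all' for both filters, every base table in the database is exported (procedure name and path are my placeholders):

```sql
CALL unload_tables('s3://my-bucket/unload/', 'all', 'all');
```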

• From the unload_history table, you can get the COPY query to load the data into any other RedShift cluster.
• Caution: You need to install this procedure on all the databases to work seamlessly.