I am trying to design a data pipeline to migrate my Hive tables into BigQuery. Hive runs on an on-premise Hadoop cluster. My current design is actually very simple: it is just a shell script that does the following for each table (a rough sketch of the script follows the loop below):
for each table source_hive_table {
- INSERT OVERWRITE TABLE target_avro_hive_table
  SELECT * FROM source_hive_table;
- Move the resulting Avro files into Google Cloud Storage using distcp
- Create the first BigQuery table:
  bq load --source_format=AVRO your_dataset.something something.avro
- Handle any casting issues from within BigQuery itself, i.e. select from the table just loaded and manually cast the problematic columns into a second table
}
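To make this more concrete, here is a rough sketch of that script. The table list, the Hive warehouse path, the bucket and the dataset names are placeholders; the *_avro staging tables are assumed to already exist as STORED AS AVRO, and distcp is assumed to be able to write to gs:// (GCS connector installed on the cluster).

#!/usr/bin/env bash
# Rough per-table sketch of the pipeline described above.
# Placeholders/assumptions: table list, Hive warehouse path, GCS bucket,
# BigQuery dataset; the *_avro staging tables already exist (STORED AS AVRO)
# and distcp can write to gs:// via the GCS connector.
set -euo pipefail

BUCKET="gs://my-migration-bucket"   # placeholder bucket
DATASET="your_dataset"              # placeholder BigQuery dataset

for TABLE in table_a table_b table_c; do
  # 1. Rewrite the Hive table as Avro files via the staging table.
  hive -e "INSERT OVERWRITE TABLE ${TABLE}_avro SELECT * FROM ${TABLE};"

  # 2. Copy the resulting Avro files to Cloud Storage.
  hadoop distcp "/user/hive/warehouse/${TABLE}_avro" "${BUCKET}/${TABLE}/"

  # 3. Load the Avro files into BigQuery.
  bq load --source_format=AVRO "${DATASET}.${TABLE}" "${BUCKET}/${TABLE}/*.avro"
done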
Do you think it makes sense? Is there any better way, perhaps using Spark?
I am not happy with the way I am handling the casting; I would like to avoid creating the BigQuery table twice.
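For reference, the casting workaround I mean looks roughly like this (dataset, table, and column names are just made-up placeholders): after the Avro load, I re-select the raw table with explicit casts into a second, final table, which is exactly the duplicate table creation I would like to avoid.

# Hypothetical illustration of the current casting step: re-select the
# freshly loaded raw table with explicit casts into a second, final table.
bq query --use_legacy_sql=false \
  --destination_table=your_dataset.something_final \
  "SELECT CAST(id AS INT64) AS id,
          CAST(event_ts AS TIMESTAMP) AS event_ts,
          name
   FROM your_dataset.something"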