We are using Azure Data Factory to copy data from an on-premises SQL table to a REST endpoint, for example, Google Cloud Storage. Our source table has more than 3 million rows. Based on the documentation at https://docs.microsoft.com/en-us/azure/data-factory/connector-rest#copy-activity-properties, the default value for writeBatchSize (the number of records written to the REST sink per batch) is 10,000. I tried increasing the value to 1,000,000 and then 5,000,000, and noticed the final file size was the same in both cases, which suggests that not all 3M records were written to GCS.

Does anyone know the maximum value for writeBatchSize? Pagination appears to apply only when REST is used as a source. Is there any workaround for my case?
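For reference, this is roughly how writeBatchSize is set in the copy activity's sink typeProperties. This is a minimal sketch based on the linked connector doc; the property names follow the RestSink documentation, but the timeout and interval values here are just illustrative, not the ones we use:

```json
{
    "sink": {
        "type": "RestSink",
        "requestMethod": "POST",
        "httpRequestTimeout": "00:05:00",
        "requestInterval": 10,
        "writeBatchSize": 10000
    }
}
```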