I'm attempting to copy a table from an on-premises SQL Server database to Azure Storage using a data pipeline in Microsoft Fabric. The table has 5 columns and 1,000 rows, all of type varchar; the primary key is also a string (a varchar value combined with a unique string suffix). When I enabled dynamic range partitioning on the copy activity's source, I got the error "SqlParallelNotSupportedDataType". Below is a simplified structure of my table:
| Column A | Column B | Column C | Column D |
|---|---|---|---|
| unqStr1 | value1 | value2 | value2 |
| unqStr2 | value3 | value5 | value1 |
| unqStr3 | value3 | value5 | value1 |
| unqStr4 | value3 | value5 | value1 |
| unqStr5 | value3 | value5 | value1 |
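For reference, a minimal T-SQL sketch of the kind of schema described above (table name, column names, and lengths are placeholders, not the real definitions):

```sql
-- Hypothetical schema matching the description: every column is varchar,
-- and the primary key is a string (varchar plus a unique string part).
CREATE TABLE dbo.MyTable (
    ColumnA varchar(50) NOT NULL PRIMARY KEY,  -- e.g. 'unqStr1'
    ColumnB varchar(50) NULL,
    ColumnC varchar(50) NULL,
    ColumnD varchar(50) NULL
);
```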
Is there a way to copy this kind of data to Azure Storage using parallel reads to improve source-side performance? I did find a sink setting called "Max rows per file," but as far as I can tell that only controls how the output files are written; my goal is better performance when reading from the database. Any guidance or suggestions would be greatly appreciated.
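Ideally I'd like each parallel copy to read a disjoint slice of the table. The sketch below shows the shape of query I have in mind, hashing the string key into numeric buckets; the bucket parameters and names are placeholders, and I haven't confirmed that Fabric's partition options can drive a query like this:

```sql
-- Hypothetical per-partition query: hash the string key into N numeric
-- buckets so each parallel reader gets a disjoint, roughly equal slice.
-- @BucketCount and @BucketId would be supplied per copy; all names here
-- are assumptions for illustration, not a tested Fabric feature.
SELECT ColumnA, ColumnB, ColumnC, ColumnD
FROM dbo.MyTable
WHERE ABS(CHECKSUM(ColumnA)) % @BucketCount = @BucketId;
```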