Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem. Because the format is language-independent, Parquet data can be accessed from a wide range of tools and languages. The Syncfusion Data Integration Platform provides flexible support for reading and writing records in the Parquet file format.
The Data Integration Platform acts as an intermediary, reading and writing records in Parquet files through the FetchParquet and PutParquet processors.
A connected workflow executes automatically in the background according to its scheduling strategy (timer driven, CRON driven, event driven, or run once), or on demand through the API or the Start and Stop buttons in the Operate panel.
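As an example of the CRON-driven strategy, NiFi-based platforms typically accept Quartz-style cron expressions, which include a leading seconds field. The expressions below are illustrative sketches of that syntax, not values taken from the product documentation:

```
0 0 2 * * ?      run once a day at 2:00 AM
0 0/15 * * * ?   run every 15 minutes, on the minute
```

The fields are seconds, minutes, hours, day of month, month, and day of week; `?` means "no specific value" for one of the two day fields.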