1.) Should we be storing the filled-out, flattened PDF template in Azure before involving it in the process to make the final report? Or is it unlikely this is part of our performance issue? Right now, that filled-out PDF only exists in memory/the final byte array. We do not save it separately. It only exists in the context of this report.
Saving the documents as byte arrays (memory streams) increases the memory held at run time and affects performance.
Instead, if possible, save each document to a FileStream and store it as a temporary file on the server. Then upload it to the Azure server and merge the documents from disk. This reduces the memory held at run time and improves performance.
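For illustration, a minimal sketch of that suggestion, assuming the .NET Framework build of Syncfusion.Pdf (where PdfLoadedDocument accepts a file path); the template path, temp-file naming, and the flatten step stand in for your own fill logic:

using System;
using System.IO;
using Syncfusion.Pdf.Parsing;

// Load the template and fill it (fill logic omitted; path is a placeholder).
PdfLoadedDocument document = new PdfLoadedDocument("template.pdf");
document.Form.Flatten = true; // flatten the filled fields

// Save to a temp file on disk instead of holding the bytes in memory.
string tempPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".pdf");
using (FileStream fs = new FileStream(tempPath, FileMode.Create, FileAccess.Write))
{
    document.Save(fs);
}
document.Close(true);

// tempPath can now be uploaded to Azure or fed straight into the merge step.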
2.) Is there a best practice for loading/downloading PDFs from Azure to work with your libraries? Our code right now doesn't seem to use Syncfusion-related libraries at those steps. If there are attachments for the user, we use only Microsoft classes: getting the blob container, the container reference, and the block blob reference for the actual file, then calling DownloadToByteArray on the CloudBlockBlob for each PDF, every time. I've seen some very short code examples on your site that just use DirectoryInfo to get all the PDFs in a folder. I think that could work with our existing setup, since we do store the attachments for a given user in one folder, but how to tie that into the Azure setup is less clear to me.
Our library does not provide any API for uploading to or downloading from an Azure server. Please continue to use the Microsoft classes to upload and download the files.
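For the folder-per-user layout described in the question, a hedged sketch with the classic WindowsAzure.Storage client (the same one DownloadToByteArray comes from): list the user's virtual folder and download each blob to a temp file rather than into a byte array. The connection string, container name, and folder name are placeholders, and the synchronous calls assume the full .NET Framework build of the SDK:

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

string connectionString = "<storage-connection-string>"; // placeholder
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobContainer container = account.CreateCloudBlobClient()
                                      .GetContainerReference("attachments"); // placeholder

// One user's attachments live under a single virtual folder.
CloudBlobDirectory userFolder = container.GetDirectoryReference("user-123"); // placeholder

foreach (IListBlobItem item in userFolder.ListBlobs())
{
    if (item is CloudBlockBlob blob)
    {
        // Download straight to a temp file instead of DownloadToByteArray.
        string localPath = Path.Combine(Path.GetTempPath(), Path.GetFileName(blob.Name));
        blob.DownloadToFile(localPath, FileMode.Create);
    }
}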
3.) What exceptions and tools do you have in the Syncfusion libraries that could help us pinpoint where we're going wrong with performance in building this report? I've been trying to hunt those down. I've added a few catches for PdfException but am at a bit of a loss as to how to more clearly discover where else our code might be inefficient.
Kindly share the code snippet for the filling and merging process in your project. It will help us analyze and pinpoint the performance issue.
Kindly also share the following details so we can validate the issue:
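Separately, one library-agnostic way to narrow this down while you gather that: wrap each stage in a Stopwatch and catch Syncfusion's PdfException, so you can see which step dominates the time. The stage method names here are placeholders for your own code:

using System;
using System.Diagnostics;
using Syncfusion.Pdf;

Stopwatch sw = Stopwatch.StartNew();
try
{
    FillTemplate();      // placeholder for your fill step
    Console.WriteLine($"Fill:  {sw.ElapsedMilliseconds} ms");

    sw.Restart();
    MergeAttachments();  // placeholder for your merge step
    Console.WriteLine($"Merge: {sw.ElapsedMilliseconds} ms");
}
catch (PdfException ex)
{
    // Syncfusion wraps PDF-specific failures in PdfException.
    Console.WriteLine($"PDF error after {sw.ElapsedMilliseconds} ms: {ex.Message}");
}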
4.) Based on the context of what we're trying to do, is there any other advice you can give for fixing our process so it doesn't time out? We do close the PdfLoadedDocuments after Saves, and I believe that's where you get a possible benefit if you use this setting:
document.EnableMemoryOptimization = true;
But I've noticed that with our existing process, document.EnableMemoryOptimization = true does not produce any consistent difference in how long the same report takes to generate.
As of now, your processing already follows the right approach to optimize performance. Kindly share a code snippet showing which operations you are performing; it will help us analyze this further.
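For comparison with your own code, a minimal sketch of the close-after-use pattern with EnableMemoryOptimization set on the output document; the input folder and output path are placeholders, and ImportPageRange is one documented way to combine pages (your merge step may differ):

using System.IO;
using Syncfusion.Pdf;
using Syncfusion.Pdf.Parsing;

PdfDocument finalDoc = new PdfDocument();
finalDoc.EnableMemoryOptimization = true; // reuse resources instead of caching copies

string[] attachmentPaths = Directory.GetFiles(@"C:\temp\attachments", "*.pdf"); // placeholder
foreach (string path in attachmentPaths)
{
    PdfLoadedDocument loaded = new PdfLoadedDocument(path);
    finalDoc.ImportPageRange(loaded, 0, loaded.Pages.Count - 1);
    loaded.Close(true); // release each source as soon as its pages are imported
}

finalDoc.Save(@"C:\temp\report.pdf"); // placeholder output
finalDoc.Close(true);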
5.) Is there any point at which you'd say we should actually break this report up into multiple reports? E.g., what is the upper limit for your product to merge documents without timing out? Would it be something like 200, 400, or 1,000 PDFs?
From what I've seen on the best practices pages, these are important:
- With bigger PDFs and more of them, not opening everything into memory at once, but combining bit by bit, seems to be important for performance.
- Use document.EnableMemoryOptimization = true; if it seems to help reduce time. But when I tested just this with our existing code, I saw no noticeable difference in performance.
In our PDF library, merging speed and performance depend on the environment's memory size (RAM) and processing speed.
If the combined document runs to more than 500 or 1,000 pages, we suggest merging in batches of 10, saving each batch to a temporary folder, and finally merging the intermediate documents into a single PDF, as in the sketch below. This reduces the memory held at run time and improves performance.
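A hedged sketch of that batching idea, assuming the source PDFs are already on local disk and using the static PdfDocumentBase.Merge overload that accepts an array of file paths; the batch size, folder names, and output path are placeholders:

using System.IO;
using System.Linq;
using Syncfusion.Pdf;

string[] allFiles = Directory.GetFiles(@"C:\temp\attachments", "*.pdf"); // placeholder
string batchFolder = Path.Combine(Path.GetTempPath(), "merge-batches");
Directory.CreateDirectory(batchFolder);
const int batchSize = 10; // per the suggestion above

// Pass 1: merge in batches and save each intermediate result to disk.
for (int i = 0; i < allFiles.Length; i += batchSize)
{
    object[] batch = allFiles.Skip(i).Take(batchSize).Cast<object>().ToArray();
    PdfDocument batchDoc = new PdfDocument();
    batchDoc.EnableMemoryOptimization = true;
    PdfDocumentBase.Merge(batchDoc, batch);
    batchDoc.Save(Path.Combine(batchFolder, $"batch-{i / batchSize}.pdf"));
    batchDoc.Close(true);
}

// Pass 2: merge the intermediate files into the final report.
object[] intermediates = Directory.GetFiles(batchFolder, "batch-*.pdf")
                                  .Cast<object>().ToArray();
PdfDocument finalDoc = new PdfDocument();
finalDoc.EnableMemoryOptimization = true;
PdfDocumentBase.Merge(finalDoc, intermediates);
finalDoc.Save(@"C:\temp\final-report.pdf"); // placeholder output
finalDoc.Close(true);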