Camel performance in OpenShift
Issue
- I have a Camel route that picks up a CSV file once a day from an SFTP location. The CSV file contains roughly 150K-250K lines. I split the file by tokenizing it line by line on "\n" and parse a specific column.
- On my local machine, the Camel app uses 1.5-2 GB of heap memory according to JConsole. When I deploy it to OpenShift 3.11, processing takes 45 minutes or more. I hooked up Prometheus and Grafana, and the memory consumption is almost the same as on my local machine (1.5-2 GB).
- What can I do to improve performance, both Camel-wise and OpenShift-wise?
- In OpenShift, I already gave my pod a minimum of 3 GB (-Xms) and a maximum of 4 GB (-Xmx) of heap, but performance is still poor (45 minutes compared to 5-8 minutes locally).
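A common cause of high heap usage and slow splits on large files is materializing the whole body before splitting. Camel's splitter supports `.split(body().tokenize("\n")).streaming()`, which feeds the route one line at a time instead of holding all 150K-250K lines in memory. The sketch below (plain Java, not the original route; the column index and sample data are hypothetical) illustrates the same streaming idea with a `BufferedReader`:

```java
import java.io.BufferedReader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class StreamingCsvParse {
    // Extract one column from each CSV line while keeping only a single
    // line in memory at a time, analogous to Camel's streaming splitter.
    // The column index is a hypothetical example.
    static List<String> parseColumn(BufferedReader reader, int col) throws Exception {
        List<String> values = new ArrayList<>();
        String line;
        while ((line = reader.readLine()) != null) { // one line at a time
            String[] cols = line.split(",", -1);
            if (col < cols.length) {
                values.add(cols[col]);
            }
        }
        return values;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical sample data standing in for the daily SFTP file.
        String csv = "id,name,amount\n1,foo,10\n2,bar,20";
        try (BufferedReader r = new BufferedReader(new StringReader(csv))) {
            System.out.println(parseColumn(r, 2)); // [amount, 10, 20]
        }
    }
}
```

With streaming enabled, heap usage should stay flat regardless of file size; if the route currently converts the body to a `String` before the split, that conversion alone can account for the 1.5-2 GB footprint observed in JConsole.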
Environment
- Red Hat Fuse 7.4.0
- OpenShift