JON 3.2 upgrade issue - The Cluster status of the 2nd node shows "DOWN"
We recently upgraded JON from 3.1.2 to 3.2 in an HA environment, and the cluster status of the 2nd node has shown "DOWN" ever since the upgrade.
We have tried the following, with no luck so far:
1) Updated cassandra.yaml and rhq-storage-auth.conf, making sure the IPs of both nodes appear in both config files (that didn't work).
2) Ran the CLI operation StorageNodeManager.runClusterMaintenance() (didn't work).
3) Ran the following nodetool repair commands, which didn't work either:
nodetool -p 7299 repair system_auth
nodetool -p 7299 repair rhq
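For reference, this is roughly what the two config files look like on each node after our edits (the IP addresses below are placeholders, not our real ones):

```
# rhq-storage-auth.conf -- one IP per line, listing every storage node
192.168.0.1
192.168.0.2

# cassandra.yaml -- the seeds list should name both nodes
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.0.1,192.168.0.2"
```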
It seems that no matter what we try, the cluster status of the 2nd node always shows "DOWN". Could the data on the 2nd node be corrupted? Is it possible to scrub the 2nd storage node and rebuild it from scratch?
Sherry