We currently have two load balancers on the community cluster named _fh-lb-comm1-ocp-master_:
- one is a classic ELB which fronts the 3 OpenShift masters; SSL is terminated here for the console (and, I assume, for OpenShift routes as well).
- one is an application ELB; I'm not familiar with this type of ELB, and I couldn't see how, or if, it was actually being used.
I'm on PTO for the next couple of weeks, so I'm not really in a position to help out here until I get back. I was reading back through [~jessesarn]'s update: for websockets to work within the community cluster, we would need an AWS classic ELB operating at layer 4 (is the classic ELB fronting the OpenShift master instances OK here?), which in turn fronts instances that implement SSL themselves. So we should disable TLS termination on the route for the backend instances, and terminate TLS within the pod instead. Am I understanding that correctly, [~jessesarn]?
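If I'm reading that right, the change on the OpenShift side would be switching the backend route to passthrough termination, so the router forwards the TLS bytes untouched and the pod terminates TLS itself. A rough sketch of what that route would look like (the route name, service name, and hostname here are hypothetical placeholders, not our actual resources):

```yaml
# Sketch only: switch the backend route from edge to passthrough termination
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: ws-backend            # hypothetical route name
spec:
  host: ws.example.com        # hypothetical hostname
  to:
    kind: Service
    name: ws-backend-svc      # hypothetical service name
  tls:
    termination: passthrough  # router does NOT decrypt; pod must serve TLS itself
```

The equivalent one-liner would be something like `oc create route passthrough ws-backend --service=ws-backend-svc --hostname=ws.example.com`. Note that with passthrough the pod needs its own certificate, and the classic ELB in front would need plain TCP listeners (not HTTPS/SSL listeners) so it doesn't terminate TLS either.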