Part 3: vRA 8.1 / vIDM 3.3 HA with F5 Deployment

Introduction

This multi-part blog focuses on deploying vRA 8.1 HA and vIDM 3.3.2 HA using an F5 BIG-IP LTM load balancer. The aim is to call out pitfalls, direction and resolutions for issues with an HA vRA 8.1 deployment. Specifically, these posts call out the additional configuration needed for vIDM HA scale out with vRA 8.1 HA. The content is broken into four parts.

Post vRA and vIDM HA install configurations

In the previous post, we deployed vRSLCM (x 1 node), vRA (x 3 nodes) and vIDM (x 3 nodes). With the F5 BIG-IP LTM pre-configured in part 1, our F5 network map should reflect the following:

  • vRA showing x 3 nodes as green
  • vIDM showing x 1 node as green and x 2 nodes as offline or unavailable
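
A quick way to confirm the same pool status from the BIG-IP command line is tmsh; the pool names below are lab examples, so substitute the pool names created in part 1:

# Run from the BIG-IP bash prompt (or an SSH session to the F5).
tmsh show ltm pool                       # availability summary for all pools
tmsh show ltm pool vidm_pool members     # per-member status for the vIDM pool
tmsh show ltm pool vra_pool members      # per-member status for the vRA pool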

At this time, the VIP FQDN for vRA 8.1 is responsive and will accept incoming requests.
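
A quick way to confirm this from any workstation is curl against this lab's vRA VIP FQDN (-k skips certificate verification, which is acceptable here since the certificates are replaced in the steps that follow):

curl -kI https://usvraha8100.ilab.int/     # expect an HTTP response, typically a redirect to the login page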

Another validation is to log in to vRSLCM and navigate to Environments, where the two environments (vRA and vIDM) should be listed, as shown below

Before consuming vRA services, we need to continue post install configuration for vRSLCM, vRA and vIDM. Most of these post install configurations are completed through vRSLCM and include the following:

  1. Import vRA SSL Cert, vIDM SSL Cert and vRSLCM SSL Cert
  2. Change SSL Certificate for vRSLCM
  3. Update (as needed) NTP, DNS, Binary Mapping, Proxy, Time Settings and/or System details
  4. Replace SSL Certificate on vIDM Node
  5. Replace SSL Certificate on vRA Nodes
  6. Re-Trust with Identity Manager
  7. Re-validate F5 BIG-IP LTM configurations supporting vIDM HA scale out
  8. vIDM Configuration supporting vIDM scale out (configured on vIDM node)
    • Add Trusted Root CA SSL certificate to vIDM (configured on vIDM Node)
    • Update vIDM FQDN to support the Load Balancer VIP FQDN (configured on vIDM Node)
  9. Re-Register with Identity Manager
  10. Validate vRSLCM can resolve vIDM and vRA FQDNs (including F5 VIP FQDNs)

1.) Import vRA SSL Cert, vIDM SSL Cert and vRSLCM SSL Cert

Navigate to vRealize Suite Lifecycle Manager > My Services > Locker


Select Import Certificate and either select Certificate file or paste Private Key and Certificate Chain.

Here is an example of the vIDM SSL cert. A few things to point out:

It's recommended to configure the SSL cert Common Name as the F5 VIP FQDN. For example, in this environment usvidm00.ilab.int serves as the DNS FQDN, the F5 Virtual Server VIP FQDN and the SSL cert Common Name. Subject Alternative Names (SANs) are configured as well: usvidm00.ilab.int, usvidm01.ilab.int, usvidm02.ilab.int and usvidm03.ilab.int.
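
If the CSR is generated outside the vRSLCM Locker, an openssl command along these lines produces a key and CSR with the VIP FQDN as the Common Name and all four names as SANs (file names and key size here are only examples; -addext requires OpenSSL 1.1.1 or later):

# Generate a private key and CSR; submit the CSR to your CA of choice.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout usvidm00.ilab.int.key -out usvidm00.ilab.int.csr \
  -subj "/CN=usvidm00.ilab.int" \
  -addext "subjectAltName=DNS:usvidm00.ilab.int,DNS:usvidm01.ilab.int,DNS:usvidm02.ilab.int,DNS:usvidm03.ilab.int"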

2.) Change SSL Certificate for vRSLCM

Navigate back to vRealize Suite Lifecycle Manager > My Services > Lifecycle Operations > Settings > Change Certificate > Replace Certificate

Select the appropriate SSL certificate (one of those imported a few steps back)

Complete the Precheck and, assuming no errors, click Finish. If the precheck reports errors, resolve them and run the certificate replacement process again.

3.) Update (as needed) NTP, DNS, Binary Mapping, Proxy, Time Settings and/or System details

Not shown – configure whatever settings are needed to support your vRSLCM environment. **Important**: review the NTP settings and make sure all nodes (vRSLCM, vRA and vIDM) are in time sync.
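
One quick way to spot-check time sync, assuming root SSH access to each appliance (the time service in use can vary by appliance build, so use whichever of ntpd or systemd-timesyncd is present):

# Run on each appliance (vRSLCM, vRA and vIDM nodes) and compare the output.
date -u                 # wall-clock time in UTC; all nodes should be within a second or two
ntpq -p                 # if ntpd is running, confirms the servers being used for sync
timedatectl status      # alternative if the appliance uses systemd-timesyncd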

4.) Replace SSL Certificate on vIDM Node

Navigate to Environments > globalenvironment (default) > View Details > … Replace Certificate

The five steps to replace the vIDM SSL Certificate are:

  1. Review Current Certificate (not shown)
  2. Select Certificate (not shown)
  3. Retrust Product Certificate
  4. Opt-in for Snapshot (not shown)
  5. Precheck (not shown)

The vIDM cert replacement process can be monitored by navigating to vRSLCM > Requests and opening the request for the vIDM cert update.

There are a number of stages to monitor in this process, and how long each takes depends on your infrastructure. A great part of vRSLCM is its error recovery options: if a stage fails, correct the error and the process picks up again where it left off.

5.) Replace SSL Certificate on vRA Nodes

The same process used to replace the vIDM SSL cert applies to the vRA nodes. Navigate to vRSLCM > Environments > Name of vRA environment > View Details > Replace Certificate

6.) Re-Trust with Identity Manager

Once the SSL cert update on the vRA nodes is complete, navigate back to vRSLCM > Environments > vRA > View Details > … > Re-Trust with Identity Manager

During Re-Trust with Identity Manager, the process re-initializes the vRA cluster, as shown below

7.) Re-validate F5 BIG-IP LTM configurations supporting vIDM HA scale out

Up to this point, all configurations have been initiated via vRSLCM. Next, we will configure the vIDM node itself to support the vIDM scale out for HA.

8.) vIDM Configuration supporting vIDM scale out (configured on vIDM node)

  • Add Trusted Root CA SSL certificate(s) to vIDM (vIDM Node)

Navigate to the FQDN of vIDM node #1 (stood up as part of the previous steps), log in with the default configuration admin account and select the Administration Console, shown below

Navigate to Appliance Settings > Manage Configuration and authenticate using the default configuration admin password (not shown)

Navigate to Install SSL Certificates > Trusted CAs

It may be necessary to add trusted root CA certs, an intermediate cert and/or the F5 trusted root CA, since vIDM leverages the F5 ClientSSL profile for the vIDM VIP FQDN. For example, below a couple of additional SSL certs are being added
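
For reference, the ClientSSL profile in play can be reviewed from the BIG-IP command line; the virtual server and profile names below are lab examples and will differ in your environment:

# Confirm which client-ssl profile is attached to the vIDM virtual server,
# then review its certificate, key and chain.
tmsh list ltm virtual vs_vidm_443 profiles
tmsh list ltm profile client-ssl vidm_clientssl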

Next, navigate to Identity Manager FQDN

  • Change the Identity Manager FQDN URL from the single vIDM FQDN to the F5 VIP FQDN

In this example, the single vIDM node deployed is https://usvidm01.ilab.int; change this to the F5 VIP FQDN https://usvidm00.ilab.int and click Save

  • **IMPORTANT** Check for time synchronization, using either NTP or host time

9.) Re-Register with Identity Manager

Navigate back to vRSLCM > Environments > vRA Environment > View Details > … Re-Register with Identity Manager

Once complete, it's recommended to triple check the F5 BIG-IP LTM ClientSSL profile, HTTP profile, and the certificates on the F5, vRA and vIDM. Note that, having completed the above steps, vIDM is now responsive at the F5 VIP FQDN.

This can be validated by navigating to the F5 VIP FQDN and reviewing the F5 statistics showing traffic passing through the load balancer. The only difference is that the F5 has only x 1 vIDM node green in the pool. You can check this via F5 > Local Traffic > Pool List > Statistics > vIDM Pool > + Pool
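
One way to confirm exactly what the F5 is presenting for the vIDM VIP is openssl s_client from any host that can reach the VIP; check that the CN/SANs match and that the chain ends at the expected root:

# Dump the certificate chain served by the F5 ClientSSL profile for the vIDM VIP.
openssl s_client -connect usvidm00.ilab.int:443 -servername usvidm00.ilab.int -showcerts </dev/null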

10.) Validate vRSLCM can resolve vIDM and vRA FQDNs (including F5 VIP FQDNs)

Validate via SSH on the vRSLCM appliance by running ping and nslookup against each F5 VIP FQDN, vRA FQDN and vIDM FQDN. Note that ping will fail for vIDM nodes 2 and 3 (not yet deployed); however, their FQDNs will resolve. For example, see below

login as: root
VMware vRealize Suite Lifecycle Manager Appliance on Photon
root@usvralcm81.ilab.int's password:
root@usvralcm81 [ ~ ]# nslookup
> usvidm00.ilab.int
Server:         10.10.4.200
Address:        10.10.4.200#53

Name:   usvidm00.ilab.int
Address: 10.10.4.180
> usvidm01.ilab.int
Server:         10.10.4.200
Address:        10.10.4.200#53

Name:   usvidm01.ilab.int
Address: 10.10.4.181
> usvidm02.ilab.int
Server:         10.10.4.200
Address:        10.10.4.200#53

Name:   usvidm02.ilab.int
Address: 10.10.4.182
*************
root@usvralcm81 [ ~ ]# ping usvidm00.ilab.int
PING usvidm00.ilab.int (10.10.4.180) 56(84) bytes of data.
64 bytes from usvidm00.ilab.int (10.10.4.180): icmp_seq=1 ttl=255 time=0.645 ms
64 bytes from usvidm00.ilab.int (10.10.4.180): icmp_seq=2 ttl=255 time=0.497 ms
64 bytes from usvidm00.ilab.int (10.10.4.180): icmp_seq=3 ttl=255 time=0.555 ms
^C
--- usvidm00.ilab.int ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 24ms
rtt min/avg/max/mdev = 0.497/0.565/0.645/0.066 ms
root@usvralcm81 [ ~ ]# ping usvidm01.ilab.int
PING usvidm01.ilab.int (10.10.4.181) 56(84) bytes of data.
64 bytes from usvidm01.ilab.int (10.10.4.181): icmp_seq=1 ttl=64 time=4.03 ms
64 bytes from usvidm01.ilab.int (10.10.4.181): icmp_seq=2 ttl=64 time=0.737 ms
^C
--- usvidm01.ilab.int ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.737/2.383/4.030/1.647 ms
root@usvralcm81 [ ~ ]# ping usvraha8100.ilab.int
PING usvraha8100.ilab.int (10.10.4.170) 56(84) bytes of data.
64 bytes from usvraha8100.ilab.int (10.10.4.170): icmp_seq=1 ttl=255 time=0.992 ms
64 bytes from usvraha8100.ilab.int (10.10.4.170): icmp_seq=2 ttl=255 time=3.80 ms
64 bytes from usvraha8100.ilab.int (10.10.4.170): icmp_seq=3 ttl=255 time=0.524 ms
^C
--- usvraha8100.ilab.int ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 6ms
rtt min/avg/max/mdev = 0.524/1.771/3.798/1.446 ms
root@usvralcm81 [ ~ ]# ping usvraha8101.ilab.int
PING usvraha8101.ilab.int (10.10.4.171) 56(84) bytes of data.
64 bytes from usvraha8101.ilab.int (10.10.4.171): icmp_seq=1 ttl=64 time=1.42 ms
64 bytes from usvraha8101.ilab.int (10.10.4.171): icmp_seq=2 ttl=64 time=0.764 ms

Summary

In this part 3 post, we reviewed the post-install configuration for vRSLCM, updated the vRA and vIDM SSL certs, and then configured vIDM to support the vIDM scale out for HA. In the next post – part 4 – we'll scale out vIDM for an HA configuration.