Forum Discussion

ameyer-pnra
May 03, 2024

Rundeck ansible F5 errors

We use Rundeck to deploy some code, and within that code we use Ansible to move hosts in and out of their respective pools on the F5. We recently upgraded to a new version of Rundeck and the latest version of Ansible.

I've seen other posts where removing the delegate_to: line fixed this error. I could do that, or install a legacy version of Ansible.

Here is the debug output from the failed task:

TASK [f5_modify : Disable from pool -Test-API-8080] *********

fatal: [hostname.example.com -> localhost]: FAILED! => {"changed": false, "msg": "argument 'server_port' is of type <class 'NoneType'> found in 'provider'. and we were unable to convert to int: <class 'NoneType'> cannot be converted to an int"}

PLAY RECAP *********************************************************************

hostname.example.com : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

Ansible code:

---
- name: "Disable from pool {{ pool_name }}"
  bigip_pool_member:
      provider:
        server: "{{ f5_ipaddress }}"
        user: "{{ f5_user }}"
        password: "{{ f5_pwd }}"
        validate_certs: "no"
        transport: "rest"
      state: forced_offline
      pool: "{{ pool_name }}"
      partition: "Common"
      host: "{{ansible_default_ipv4.address}}"
      port: "{{ pool_member_port }}"
  delegate_to: localhost
  when: action == "disable"
  tags: f5_manage


# Enable pool member again if the deploy type is rolling, or the env is not prod
- name: "Enable in pool {{ pool_name }}"
  bigip_pool_member:
      provider:
        server: "{{ f5_ipaddress }}"
        user: "{{ f5_user }}"
        password: "{{ f5_pwd }}"
        validate_certs: "no"
        transport: "rest"
      state: enabled
      pool: "{{ pool_name }}"
      partition: "Common"
      host: "{{ansible_default_ipv4.address}}"
      port: "{{ pool_member_port }}"
  delegate_to: localhost
  when: action == "enable"
  tags: f5_manage

- name: Wait for clients to gracefully bleed off the server
  wait_for:
    host: "{{ansible_default_ipv4.address}}"
    port: "{{ pool_member_port }}"
    delay: 5
    timeout: 120
    state: drained
  ignore_errors: True
  when:
     - action == "disable"
     - deploy_type == "rolling"
  tags: f5_manage
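The error message itself complains that server_port inside provider arrived as None. Independent of any other changes, one workaround to try (a sketch, not a confirmed fix) is to set server_port explicitly in the provider block so the newer module version never sees a missing value; 443 here is an assumption based on the usual BIG-IP management port:

```yaml
# Sketch: same disable task, but with server_port set explicitly in
# the provider block. 443 is an assumption (the common BIG-IP HTTPS
# management port); adjust if your management interface differs.
- name: "Disable from pool {{ pool_name }}"
  bigip_pool_member:
      provider:
        server: "{{ f5_ipaddress }}"
        server_port: 443          # assumption: default HTTPS management port
        user: "{{ f5_user }}"
        password: "{{ f5_pwd }}"
        validate_certs: "no"
        transport: "rest"
      state: forced_offline
      pool: "{{ pool_name }}"
      partition: "Common"
      host: "{{ ansible_default_ipv4.address }}"
      port: "{{ pool_member_port }}"
  delegate_to: localhost
  when: action == "disable"
  tags: f5_manage
```

The same server_port line would go into the provider block of the enable task as well.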


1 Reply

  • I think you have used an incorrect parameter for the pool member: host: "{{ansible_default_ipv4.address}}" should be member. Find the correct details and modify accordingly.

    - name: "Disable from pool {{ pool_name }}"
      bigip_pool_member:
        provider:
          server: "{{ f5_ipaddress }}"
          user: "{{ f5_user }}"
          password: "{{ f5_pwd }}"
          validate_certs: "no"
          transport: "rest"
        state: forced_offline
        pool: "{{ pool_name }}"
        partition: "Common"
        member: "{{ pool_member_ip }}"
        port: "{{ pool_member_port }}"
      delegate_to: localhost
      when: action == "disable"
      tags: f5_manage


    If the issue persists, let me know.

    FYI

    The parameter host in the bigip_pool_member module should specify the IP address or hostname of the pool member you want to disable, but you have used {{ansible_default_ipv4.address}}. This variable typically represents the IP address of the Ansible control machine, not the pool member's IP address.

    To fix this, you should replace host: "{{ansible_default_ipv4.address}}" with member: "{{ pool_member_ip }}", assuming you have a variable pool_member_ip that holds the IP address of the pool member you want to disable.
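If that alone doesn't resolve it, it may also be worth checking the bigip_pool_member documentation for the exact Ansible/collection version installed, since parameter names have shifted across versions. A hedged sketch using the fully qualified collection name; the name and address parameters here are assumptions to verify against your installed module docs:

```yaml
# Sketch only: verify parameter names against the bigip_pool_member
# documentation for the f5networks.f5_modules version you have installed.
- name: "Disable from pool {{ pool_name }}"
  f5networks.f5_modules.bigip_pool_member:
    provider:
      server: "{{ f5_ipaddress }}"
      server_port: 443          # assumption: default management port
      user: "{{ f5_user }}"
      password: "{{ f5_pwd }}"
      validate_certs: false
    state: forced_offline
    pool: "{{ pool_name }}"
    partition: "Common"
    name: "{{ inventory_hostname }}"            # node name (assumption)
    address: "{{ ansible_default_ipv4.address }}"
    port: "{{ pool_member_port }}"
  delegate_to: localhost
  when: action == "disable"
  tags: f5_manage
```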