When orchestrating multi-server deployments with Ansible, a common challenge arises: persisting variables across different playbooks. In my case, I needed the `commandserver` fact generated in `playbook_commandserver.yml` to remain available in subsequent playbooks like `playbook_agent.yml`.
While `set_fact` works well within a single play, the fact is tied to the host it was set on and only lives for the current playbook run. Here's the problematic code snippet:
```yaml
- name: Set hostname of command server as fact
  set_fact:
    commandserver: "{{ cs.stdout }}"
```
This variable disappears once the playbook completes, making it unavailable for subsequent includes.
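For context, this is the kind of top-level wrapper where the problem shows up; a minimal sketch assuming the two playbooks are chained from a `site.yml` (the wrapper file itself is illustrative):

```yaml
# site.yml -- chains the two playbooks in one run (illustrative)
# The commandserver fact is set per-host in the first playbook and is not
# visible to the different hosts targeted by the second one.
- import_playbook: playbook_commandserver.yml
- import_playbook: playbook_agent.yml
```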
1. Using hostvars with a Special Host
Create a dummy host group to store cross-playbook variables:
```yaml
- name: Store commandserver in persistent hostvars
  add_host:
    name: "vars_keeper"
    commandserver: "{{ cs.stdout }}"
    groups: "persistent_vars"

- name: Access in later playbook
  debug:
    var: hostvars['vars_keeper']['commandserver']
```
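A detail worth knowing: `add_host` bypasses the host loop and effectively runs once, and the in-memory `vars_keeper` host survives for the remainder of the run across all subsequent plays, but not across separate `ansible-playbook` invocations.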
2. Leveraging Redis for Distributed Fact Storage
For complex deployments, use Redis as a shared fact cache:
```yaml
- name: Install redis facts cache plugin
  ansible.builtin.get_url:
    url: https://raw.githubusercontent.com/ansible-collections/community.general/main/plugins/cache/redis.py
    dest: "{{ lookup('env', 'HOME') }}/.ansible/plugins/cache/redis.py"

- name: Configure redis fact caching in ansible.cfg
  lineinfile:
    path: /etc/ansible/ansible.cfg
    insertafter: '^\[defaults\]'
    line: "{{ item }}"
  loop:
    - "fact_caching = redis"
    - "fact_caching_connection = localhost:6379:0"
```

Note that `fact_caching = redis` alone is not enough: `fact_caching_connection` must also point at your Redis instance. And if you already have the `community.general` collection installed, the cache plugin ships with it, so the manual download step can be skipped.
3. Temporary Files as a Simple Alternative
While not elegant, this approach works when infrastructure constraints rule out the options above:
```yaml
- name: Save commandserver to temp file
  copy:
    content: "{{ cs.stdout }}"
    dest: "/tmp/ansible_commandserver.txt"
    mode: "0644"
  delegate_to: localhost  # lookup('file') below reads from the controller

- name: Read in subsequent playbook
  set_fact:
    commandserver: "{{ lookup('file', '/tmp/ansible_commandserver.txt') }}"
```
For AWS deployments, consider extending your dynamic inventory script to include the commandserver details:
```python
#!/usr/bin/env python
# inventory_aws.py
import json

import boto3


def main():
    ec2 = boto3.client('ec2')  # used by your instance-discovery logic
    inventory = {'_meta': {'hostvars': {}}}
    # Add your existing inventory logic here

    # Expose the commandserver value globally via the 'all' group
    inventory['all'] = {'vars': {'commandserver': 'your-command-server.aws.com'}}
    print(json.dumps(inventory))


if __name__ == '__main__':
    main()
```
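Every playbook run against this inventory then sees `commandserver` as a global variable, e.g. `ansible-playbook -i inventory_aws.py playbook_agent.yml`.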
After extensive testing across different Ansible versions (2.9-2.15), I recommend this hybrid approach:

- Use `add_host` for immediate subsequent playbooks
- Implement Redis caching for larger deployments
- Combine with inventory variables for static values
Remember that variable persistence strategies should align with your security requirements: sensitive data might need encrypted temporary files or Vault integration.
Stepping back to the general pattern: multi-server deployments often require sharing dynamic values between playbooks, and propagating a command server's DNS name to agent nodes is particularly common in cluster setups. The standard `set_fact` approach falls short because, as noted above, facts are only available within the current playbook run.
A clean solution is to store the value as a host variable for a dummy host that all playbooks can access:
```yaml
- name: Store command server DNS as host variable
  add_host:
    name: "vars_keeper"
    commandserver_dns: "{{ cs.stdout }}"
    groups: "vars_keeper_group"

- name: Retrieve in another playbook
  debug:
    msg: "Command server is {{ hostvars['vars_keeper']['commandserver_dns'] }}"
```
For more permanent storage across runs, consider using group variables:
```yaml
- name: Write to group_vars file
  copy:
    content: "commandserver_dns: {{ cs.stdout }}"
    dest: "{{ playbook_dir }}/group_vars/all/commandserver.yml"
    mode: "0644"
  delegate_to: localhost  # group_vars lives on the controller, not the target
```
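On the next run, Ansible picks this file up automatically for all hosts. The generated file is plain YAML (the value shown here is a made-up example):

```yaml
# group_vars/all/commandserver.yml (generated by the task above)
commandserver_dns: ec2-203-0-113-10.compute-1.amazonaws.com
```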
For enterprise deployments, integrating Redis provides a robust solution:
```yaml
- name: Store in Redis
  community.general.redis_data:
    key: "commandserver_dns"
    value: "{{ cs.stdout }}"
    login_host: redis.example.com

- name: Retrieve from Redis
  community.general.redis_data_info:
    key: "commandserver_dns"
    login_host: redis.example.com
  register: redis_result
```

(The `community.general.redis` module manages the Redis server itself; key/value operations live in `redis_data` and `redis_data_info`, both of which need the `redis` Python library on the executing host. The stored value comes back as `redis_result.value`.)
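For read-only consumers, the value can also be fetched inline with the `community.general.redis` lookup plugin; a sketch assuming the plugin's connection settings point at the same Redis instance:

```yaml
- name: Use the value directly via the redis lookup
  debug:
    msg: "Command server is {{ lookup('community.general.redis', 'commandserver_dns') }}"
```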
Each method has different implications:
- Host variables: Fastest for single-run operations
- Group variables: Persists across runs but requires file I/O
- Redis: Adds network latency but scales well
Here's how we might implement this in an AWS cluster deployment:
```yaml
- name: Discover command server DNS
  shell: |
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    aws ec2 describe-instances \
      --instance-ids "$INSTANCE_ID" \
      --query 'Reservations[0].Instances[0].PublicDnsName' \
      --output text
  register: command_dns
```
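Note that on instances where IMDSv2 is enforced, the plain metadata call above is rejected; a token-based variant, which also reads the `public-hostname` metadata key directly instead of calling `describe-instances`:

```yaml
- name: Discover command server DNS (IMDSv2 variant)
  shell: |
    # Fetch a short-lived session token, then query the metadata service with it
    TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/latest/meta-data/public-hostname
  register: command_dns
```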
```yaml
- name: Share via host variables
  add_host:
    name: "cluster_var_store"
    cluster_command_node: "{{ command_dns.stdout }}"

- name: Use in agent playbook
  debug:
    msg: "Connecting to command server at {{ hostvars['cluster_var_store']['cluster_command_node'] }}"
```
Always include validation when sharing variables:
```yaml
- name: Validate shared variable
  fail:
    msg: "Command server DNS not set"
  when: hostvars['cluster_var_store']['cluster_command_node'] is not defined
```
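The same check reads more idiomatically with `assert`, which also lets you validate that the value is non-empty:

```yaml
- name: Validate shared variable (assert variant)
  assert:
    that:
      - hostvars['cluster_var_store']['cluster_command_node'] is defined
      - hostvars['cluster_var_store']['cluster_command_node'] | length > 0
    fail_msg: "Command server DNS not set"
```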