Implementing Truly Global Handlers in Ansible for Cross-Playbook Notifications


When working with complex Ansible infrastructures, you'll quickly encounter the limitation that handlers are normally scoped to:

  • The current playbook file
  • Explicitly imported roles/files
  • Their immediate parent context

This becomes problematic when you want to notify a common handler (like service restarts) from tasks across different playbooks or roles without repetitive declarations.

Here's how I solved this in production environments:

# project_root/
# ├── global_handlers/
# │   └── main.yml
# ├── playbooks/
# │   ├── webserver.yml
# │   └── database.yml
# └── roles/
#     ├── common/
#     ├── app_server/
#     └── database/

# global_handlers/main.yml
---
- name: restart apache
  ansible.builtin.service:
    name: httpd
    state: restarted
    enabled: yes
    
- name: reload nginx
  ansible.builtin.service:
    name: nginx
    state: reloaded

In your ansible.cfg, configure the handlers path:

[defaults]
handlers_path = ./global_handlers

Now any playbook can notify these handlers:

# playbooks/webserver.yml
- hosts: webservers
  tasks:
    - name: Update httpd config
      ansible.builtin.template:
        src: templates/httpd.conf.j2
        dest: /etc/httpd/conf/httpd.conf
      notify: restart apache

# roles/database/tasks/main.yml
- name: Configure DB connection
  ansible.builtin.template:
    src: templates/db.cnf.j2  
    dest: /etc/myapp/db.cnf
  notify: restart apache  # Works across roles!

A few caveats when relying on global handlers:

  • Handler names must be unique across all global handlers
  • Changes to a global handler affect every playbook that notifies it - version carefully
  • Consider prefixing handler names (e.g., "global_restart_apache") - see the sketch after this list
  • Test handler idempotence thoroughly
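
For example, a prefix makes it obvious at the notify site that a shared handler is being triggered (a minimal sketch; the "global_" prefix is just a naming convention, nothing Ansible enforces):

# global_handlers/main.yml
- name: global_restart_apache
  ansible.builtin.service:
    name: httpd
    state: restarted

# Tasks then notify the prefixed name explicitly:
#   notify: global_restart_apache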

For more control, use an include_tasks pattern inside the play's handlers section; notifying the include's name runs every handler task in the included file:

# In your play definition
- hosts: all
  handlers:
    - name: Include global handlers
      ansible.builtin.include_tasks: /path/to/global_handlers.yml
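
A task anywhere in that play can then trigger the whole set at once (the template task below is illustrative):

- name: Update httpd config
  ansible.builtin.template:
    src: templates/httpd.conf.j2
    dest: /etc/httpd/conf/httpd.conf
  # With a dynamic include used as a handler, you notify the include
  # task's name; every handler in the included file then runs.
  notify: Include global handlers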

Stepping back: when managing complex infrastructures with Ansible, you'll eventually need to trigger common operations (like service restarts) from many different roles and playbooks. By default, a handler's scope is limited to the playbook (and the roles it applies), which creates maintenance overhead when several playbooks need the same restart logic.

Consider this typical scenario:


# roles/webserver/tasks/main.yml
- name: Update httpd config
  ansible.builtin.template:
    src: httpd.conf.j2
    dest: /etc/httpd/conf/httpd.conf
  notify: restart httpd

# roles/webserver/handlers/main.yml
- name: restart httpd
  ansible.builtin.service:
    name: httpd
    state: restarted

This handler only works within the webserver role. If another role modifies httpd-related configurations, it can't reuse this handler without duplication.
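
To see why this is painful, a second role touching the same service ends up carrying its own copy of the handler (the "ssl" role here is hypothetical, purely for illustration):

# roles/ssl/handlers/main.yml -- a duplicate of the webserver role's handler
- name: restart httpd
  ansible.builtin.service:
    name: httpd
    state: restarted

Several patterns can remove this duplication: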

1. Using a dedicated role for shared handlers

Create a dedicated role for shared handlers:


# playbooks/site.yml
- hosts: all
  # A role that ships only handlers; applying it makes those handlers
  # notifiable from any task in this play.
  roles:
    - common_handlers
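
The shared role itself only needs a handlers file (a sketch; the handler name matches the earlier examples, and the notifying task below is hypothetical):

# roles/common_handlers/handlers/main.yml
- name: restart httpd
  ansible.builtin.service:
    name: httpd
    state: restarted

# roles/app_server/tasks/main.yml -- any role applied in the same play
# can now notify the shared handler by name:
- name: Update proxy config
  ansible.builtin.template:
    src: proxy.conf.j2
    dest: /etc/httpd/conf.d/proxy.conf
  notify: restart httpd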

2. Dynamic imports with handler plugins

A more experimental route is a custom plugin layer. Ansible does not ship a "handler" plugin type, so something like handler_add would have to come from a third-party collection or be written in-house; the snippet below is only a structural sketch:


# ansible.cfg  (only meaningful if a custom plugin loader consumes it;
# handler_plugins is not a standard Ansible setting)
[defaults]
handler_plugins = ./plugins/handler

# plugins/handler/handler_add.py
# Structural sketch only: core Ansible has no handler plugin API, so
# HandlerBase stands in for whatever base class a custom framework provides.
from ansible.plugins.handler import HandlerBase

class HandlerModule(HandlerBase):
    def run(self, handler, connection, play_context):
        # Custom global handler logic
        return super().run(handler, connection, play_context)

3. Meta-handler approach

Create a meta-handler that triggers actual handlers:


# group_vars/all.yml
global_handlers:
  - name: restart_httpd
    actual_handler: "restart httpd"

# tasks/anywhere.yml
- name: Modify SSL config
  ansible.builtin.template:
    src: ssl.conf.j2
    dest: /etc/httpd/conf.d/ssl.conf
  notify: meta restart_httpd
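
For this to work, a handler actually named "meta restart_httpd" has to exist somewhere in scope to act as the dispatcher. A hedged sketch of that dispatching layer, assuming the shared-handlers role from approach 1 (chaining via changed_when and a handler-to-handler notify is an assumption of this sketch, not part of the original pattern):

# roles/common_handlers/handlers/main.yml
- name: meta restart_httpd
  # A no-op "forwarder": it reports changed so that its own notify fires,
  # chaining to the real handler defined below it.
  ansible.builtin.debug:
    msg: "Forwarding notification to: restart httpd"
  changed_when: true
  notify: restart httpd

- name: restart httpd
  ansible.builtin.service:
    name: httpd
    state: restarted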

When implementing global handlers:

  • Handler execution order becomes critical (see the ordering sketch after this list)
  • Debugging complexity increases
  • Playbook parsing performance may be affected
  • Idempotency must be carefully maintained
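
Ordering trips people up because handlers run in the order they are defined in the handlers file, not in the order they were notified. A minimal illustration (handler names are hypothetical):

# global_handlers/main.yml  -- definition order is execution order
- name: reload app config
  ansible.builtin.debug:
    msg: "Defined first, so it always runs first when handlers are flushed"

- name: restart app
  ansible.builtin.debug:
    msg: "Defined second, so it runs second, even if it was notified first"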

Here's how we might implement a global rolling restart handler:


# common_handlers/handlers/main.yml
- name: rolling_restart_httpd
  # Fire-and-forget: systemd-run launches a transient unit that performs
  # the restart outside the Ansible task's own process.
  ansible.builtin.command: |
    /usr/bin/systemd-run
    --property=Conflicts=httpd.service
    /usr/bin/systemctl restart httpd
  async: 45
  poll: 0

This handler can then be notified from any configuration-change task across all playbooks. The availability part comes from how you roll it out: batch the hosts at the play level so only part of the pool restarts at a time, as sketched below.
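
A minimal sketch of that batching (host group and batch size are placeholders; common_handlers is the shared role from approach 1):

# playbooks/webserver.yml
- hosts: webservers
  serial: 1                      # one host at a time keeps the rest of the pool serving
  roles:
    - common_handlers            # provides rolling_restart_httpd

  tasks:
    - name: Update httpd config
      ansible.builtin.template:
        src: templates/httpd.conf.j2
        dest: /etc/httpd/conf/httpd.conf
      notify: rolling_restart_httpd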