That dreaded MODULE FAILURE message in Ansible often leaves developers scratching their heads. Here's what's happening under the hood when you encounter it during service restarts:
$ ansible -i /opt/ansible/ec2.py "tag_Function_app:&tag_Application_pro:&tag_Environment_pqa" \
--private-key=~/.ssh/id_root_rsa -m shell --sudo -a "service httpd restart" -u root
First, verify your SSH connection works independently:
$ ssh -i ~/.ssh/id_root_rsa root@10.221.142.0 "service httpd restart"
If this works but Ansible fails, we need to dig deeper into Ansible's execution environment.
Add -vvv to your Ansible command for detailed logging:
$ ansible -vvv -i /opt/ansible/ec2.py "tag_Function_app:&tag_Application_pro:&tag_Environment_pqa" \
--private-key=~/.ssh/id_root_rsa -m shell --sudo -a "service httpd restart" -u root
The verbose output usually points to one of three underlying problems (a quick sudo check follows this list):
1. Python Path Issues: The target host might have Python in a non-standard location
2. Permission Problems: The sudo configuration might be restrictive
3. Module Dependencies: Required Python packages might be missing
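For the second item, one classic culprit worth ruling out early is a requiretty rule in sudoers, which older RHEL/CentOS images enable by default and which breaks the non-interactive sudo sessions Ansible opens. A quick check over plain SSH (reusing the key and host from the examples above) might look like:
$ ssh -i ~/.ssh/id_root_rsa root@10.221.142.0 "grep -rn requiretty /etc/sudoers /etc/sudoers.d/ 2>/dev/null"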
Test basic connectivity and Python availability:
$ ansible target_host -m ping
$ ansible target_host -a "which python"
$ ansible target_host -a "python -V"
Create a test playbook with explicit environment settings:
---
- hosts: webservers
  gather_facts: no
  tasks:
    - name: Test raw command execution
      raw: echo "Test command"
      register: test_output
    - debug: var=test_output
    - name: Explicit Python path
      shell: /usr/bin/python -c "import sys; print(sys.path)"
      register: python_path
    - debug: var=python_path
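Run it like any other playbook; the filename here is just a placeholder:
$ ansible-playbook -i /opt/ansible/ec2.py test_env.yml -vvv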
Try these nuclear options (examples below):
1. Use -m raw instead of -m shell to bypass the module system entirely
2. Temporarily enable password-less sudo on the target hosts
3. Check /var/log/secure on the target hosts for authentication errors
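For example, the original restart can be retried with the raw module, and authentication failures on a target host usually surface with a quick grep (tags, key, and IP reused from the commands above):
$ ansible -i /opt/ansible/ec2.py "tag_Function_app:&tag_Application_pro:&tag_Environment_pqa" \
  --private-key=~/.ssh/id_root_rsa -m raw --sudo -a "service httpd restart" -u root
$ ssh -i ~/.ssh/id_root_rsa root@10.221.142.0 "grep -iE 'sudo|pam|auth' /var/log/secure | tail -n 20"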
Nothing's more annoying than seeing that cryptic MODULE FAILURE message when running Ansible playbooks or ad-hoc commands. Unlike typical command failures that show exit codes and error messages, this one gives zero context about what actually went wrong.
The MODULE FAILURE message typically appears when Ansible can't properly execute the module on the remote host. Common triggers include:
- Permission issues with the temporary module file Ansible transfers
- Python environment problems on the target host
- Syntax errors in the module arguments
- Network connectivity problems during module transfer
1. Enable Verbose Output
Add -vvv to see the raw communication between the control node and remote hosts:
ansible -i inventory.ini all -m shell -a "service httpd restart" -vvv
2. Check the Ansible Temp Directory
SSH into the problematic host and examine:
ls -la /root/.ansible/tmp/
cat /root/.ansible/tmp/ansible-tmp-*/command.py
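By default Ansible deletes these temporary files as soon as the module finishes, so the directory is often empty. Setting ANSIBLE_KEEP_REMOTE_FILES=1 on the control node keeps them on the target for inspection:
ANSIBLE_KEEP_REMOTE_FILES=1 ansible target_host -m shell -a "service httpd status" -vvv
Then repeat the ls above on the target host.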
3. Test Raw SSH Connectivity
Verify basic SSH works with the same credentials:
ssh -i ~/.ssh/id_root_rsa root@target_host "service httpd status"
4. Python Environment Verification
Check if Python interpreter exists and is accessible:
ansible target_host -m raw -a "which python || which python3"
Case 1: Python Path Issues
Explicitly set the Python interpreter in inventory:
[webservers]
web1 ansible_host=10.0.0.1 ansible_python_interpreter=/usr/bin/python3
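If setting it host by host is tedious, the same variable can live in group_vars; the group name below simply mirrors the inventory example:
# group_vars/webservers.yml
ansible_python_interpreter: /usr/bin/python3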
Case 2: Permission Problems
Try changing the remote tmp directory location in ansible.cfg:
[defaults]
remote_tmp = /tmp/.ansible-${USER}
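After changing it, re-run a trivial module with -vvv and confirm that the temporary paths in the output now point at the new location:
ansible target_host -m ping -vvv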
Case 3: Module Transfer Failures
Force module transfer in debug mode:
ANSIBLE_DEBUG=1 ansible target -m setup -vvvv
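On reasonably recent Ansible versions the transferred module lands on the target as a self-extracting AnsiballZ_*.py wrapper; if one survives in the temp directory (see step 2, ideally with ANSIBLE_KEEP_REMOTE_FILES=1 set), it can be unpacked and re-run by hand to expose the real traceback:
cd /root/.ansible/tmp/ansible-tmp-*/
python AnsiballZ_setup.py explode
python AnsiballZ_setup.py execute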
Save this as debug_module_failure.yml:
- hosts: problematic_hosts
  gather_facts: no
  tasks:
    - name: Test raw command execution
      raw: echo "Python test" > /tmp/python_test.txt
    - name: Verify Python availability
      raw: which python || which python3
      register: py_path
    - debug: var=py_path
    - name: Check temporary directory permissions
      raw: ls -ld /root/.ansible/tmp /tmp
Run it with maximum verbosity:
ansible-playbook debug_module_failure.yml -vvvv