Automate Linux Server Management with Ansible

· 6 min read
Goel Academy
DevOps & Cloud Learning Hub

Managing 50 servers manually? SSHing into each one to update packages, add users, or change a config file? That's not engineering -- that's suffering. Ansible lets you describe the state you want and applies it across your entire fleet in one command. No agents to install, no master server to maintain -- just SSH and YAML.

Why Ansible for Linux Automation

| Feature              | Ansible        | Shell Scripts | Puppet/Chef   |
|----------------------|----------------|---------------|---------------|
| Agent required       | No (SSH only)  | No            | Yes           |
| Idempotent           | Yes (built-in) | Manual effort | Yes           |
| Learning curve       | Low (YAML)     | Medium (Bash) | High (DSL)    |
| Parallel execution   | Yes            | Manual        | Yes           |
| Inventory management | Built-in       | None          | Separate tool |
| Error handling       | Built-in       | Manual        | Built-in      |

The killer feature: idempotency. Run the same playbook 10 times and the result is the same, because Ansible checks the current state of each resource before making changes and skips anything that already matches.
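As a minimal sketch of what that means in practice, this task declares that a package must be present rather than scripting the install steps. On the first run Ansible installs it and reports `changed`; on every later run it verifies the package and reports `ok` without touching anything:

```yaml
# Declarative: you state the end state, not the steps.
# Safe to run repeatedly — no "already installed" errors.
- name: Ensure nginx is installed
  apt:
    name: nginx
    state: present
```

Compare that with a shell script, where you'd have to write the "is it already installed?" check yourself for every resource.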

Installation and Setup

# Install Ansible on your control node (the machine you run Ansible FROM)
# Ubuntu/Debian
sudo apt update && sudo apt install -y ansible

# RHEL/CentOS
sudo dnf install -y ansible-core

# pip (any distro — latest version)
pip3 install ansible

# Verify installation
ansible --version

Inventory — Defining Your Servers

The inventory tells Ansible which servers to manage:

# Create inventory file
mkdir -p ~/ansible && cat > ~/ansible/inventory.ini << 'EOF'
[webservers]
web1 ansible_host=10.0.1.10
web2 ansible_host=10.0.1.11

[dbservers]
db1 ansible_host=10.0.1.20
db2 ansible_host=10.0.1.21

[monitoring]
prometheus ansible_host=10.0.1.30

[production:children]
webservers
dbservers
monitoring

[all:vars]
ansible_user=deploy
ansible_ssh_private_key_file=~/.ssh/deploy_key
ansible_python_interpreter=/usr/bin/python3
EOF

# Test connectivity to all servers
ansible all -i ~/ansible/inventory.ini -m ping

# Test specific group
ansible webservers -i ~/ansible/inventory.ini -m ping

Ad-Hoc Commands — Quick One-Liners

Before writing playbooks, ad-hoc commands let you run quick tasks across servers:

# Check uptime on all servers
ansible all -i inventory.ini -m command -a "uptime"

# Check disk space on webservers
ansible webservers -i inventory.ini -m shell -a "df -h / | tail -1"

# Install a package on all servers
ansible all -i inventory.ini -m apt -a "name=htop state=present" --become

# Restart a service on webservers
ansible webservers -i inventory.ini -m service -a "name=nginx state=restarted" --become

# Copy a file to all servers
ansible all -i inventory.ini -m copy -a "src=./motd.txt dest=/etc/motd" --become

# Check which servers need updates
ansible all -i inventory.ini -m shell -a "apt list --upgradable 2>/dev/null | wc -l" --become

Your First Playbook

Playbooks are YAML files that describe desired server state:

cat > ~/ansible/setup-webserver.yml << 'EOF'
---
- name: Configure web servers
  hosts: webservers
  become: yes

  vars:
    app_user: webapp
    app_dir: /var/www/myapp
    packages:
      - nginx
      - certbot
      - python3-certbot-nginx
      - htop
      - curl

  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Install required packages
      apt:
        name: "{{ packages }}"
        state: present

    - name: Create application user
      user:
        name: "{{ app_user }}"
        shell: /bin/bash
        create_home: yes
        groups: www-data
        append: yes

    - name: Create application directory
      file:
        path: "{{ app_dir }}"
        state: directory
        owner: "{{ app_user }}"
        group: www-data
        mode: '0755'

    - name: Deploy nginx config
      template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/sites-available/myapp
        mode: '0644'
      notify: Reload nginx

    - name: Enable site
      file:
        src: /etc/nginx/sites-available/myapp
        dest: /etc/nginx/sites-enabled/myapp
        state: link
      notify: Reload nginx

    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
        enabled: yes

  handlers:
    - name: Reload nginx
      service:
        name: nginx
        state: reloaded
EOF

# Run the playbook
ansible-playbook -i inventory.ini ~/ansible/setup-webserver.yml

# Dry run (check mode) — see what WOULD change
ansible-playbook -i inventory.ini ~/ansible/setup-webserver.yml --check --diff
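The playbook references templates/nginx.conf.j2, which you need to create alongside it. A minimal sketch of what that Jinja2 template could look like (the listen port and static-site layout are illustrative assumptions, not part of the playbook above):

```
# templates/nginx.conf.j2 — rendered per host by the template module.
# ansible_host comes from the inventory; app_dir from the playbook vars.
server {
    listen 80;
    server_name {{ ansible_host }};

    root {{ app_dir }};
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

Because it's a template, each web server gets a config rendered with its own variable values; change a variable, re-run the playbook, and the handler reloads nginx only on hosts where the file actually changed.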

Real Playbook: Server Hardening

This is a playbook you'll actually use in production -- it applies common security baseline settings (SSH lockdown, fail2ban, firewall, kernel parameters) across all servers:

cat > ~/ansible/harden-servers.yml << 'EOF'
---
- name: Harden Linux servers
  hosts: all
  become: yes

  tasks:
    - name: Set proper SSH configuration
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
        state: present
      loop:
        - { regexp: '^#?PermitRootLogin', line: 'PermitRootLogin no' }
        - { regexp: '^#?PasswordAuthentication', line: 'PasswordAuthentication no' }
        - { regexp: '^#?X11Forwarding', line: 'X11Forwarding no' }
        - { regexp: '^#?MaxAuthTries', line: 'MaxAuthTries 3' }
        - { regexp: '^#?ClientAliveInterval', line: 'ClientAliveInterval 300' }
      notify: Restart sshd

    - name: Install fail2ban
      apt:
        name: fail2ban
        state: present

    - name: Enable fail2ban
      service:
        name: fail2ban
        state: started
        enabled: yes

    - name: Configure UFW defaults
      ufw:
        direction: "{{ item.direction }}"
        policy: "{{ item.policy }}"
      loop:
        - { direction: incoming, policy: deny }
        - { direction: outgoing, policy: allow }

    - name: Allow SSH through UFW
      ufw:
        rule: allow
        port: '22'
        proto: tcp

    - name: Enable UFW
      ufw:
        state: enabled

    - name: Set kernel security parameters
      sysctl:
        name: "{{ item.key }}"
        value: "{{ item.value }}"
        sysctl_set: yes
        reload: yes
      loop:
        - { key: 'net.ipv4.conf.all.rp_filter', value: '1' }
        - { key: 'net.ipv4.conf.default.accept_source_route', value: '0' }
        - { key: 'net.ipv4.icmp_echo_ignore_broadcasts', value: '1' }
        - { key: 'kernel.randomize_va_space', value: '2' }

    - name: Remove unnecessary packages
      apt:
        name:
          - telnet
          - rsh-client
          - rsh-server
        state: absent

    - name: Set password expiry policy
      lineinfile:
        path: /etc/login.defs
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
      loop:
        - { regexp: '^PASS_MAX_DAYS', line: 'PASS_MAX_DAYS 90' }
        - { regexp: '^PASS_MIN_DAYS', line: 'PASS_MIN_DAYS 7' }
        - { regexp: '^PASS_WARN_AGE', line: 'PASS_WARN_AGE 14' }

  handlers:
    - name: Restart sshd
      service:
        name: sshd
        state: restarted
EOF

# Run hardening on all servers
ansible-playbook -i inventory.ini ~/ansible/harden-servers.yml

Real Playbook: User Management

Managing users across dozens of servers is one of Ansible's most practical use cases:

cat > ~/ansible/manage-users.yml << 'EOF'
---
- name: Manage user accounts across all servers
  hosts: all
  become: yes

  vars:
    admin_users:
      - name: alice
        key: "ssh-ed25519 AAAAC3... alice@company.com"
        groups: sudo,adm
      - name: bob
        key: "ssh-ed25519 AAAAC3... bob@company.com"
        groups: sudo,adm

    developer_users:
      - name: charlie
        key: "ssh-ed25519 AAAAC3... charlie@company.com"
        groups: developers

    removed_users:
      - oldemployee1
      - contractor2

  tasks:
    - name: Create developer group
      group:
        name: developers
        state: present

    - name: Create admin users
      user:
        name: "{{ item.name }}"
        groups: "{{ item.groups }}"
        shell: /bin/bash
        create_home: yes
        state: present
      loop: "{{ admin_users }}"

    - name: Add SSH keys for admin users
      authorized_key:
        user: "{{ item.name }}"
        key: "{{ item.key }}"
        exclusive: yes
      loop: "{{ admin_users }}"

    - name: Create developer users (webservers only)
      user:
        name: "{{ item.name }}"
        groups: "{{ item.groups }}"
        shell: /bin/bash
        state: present
      loop: "{{ developer_users }}"
      when: "'webservers' in group_names"

    - name: Remove former employees
      user:
        name: "{{ item }}"
        state: absent
        remove: yes
      loop: "{{ removed_users }}"
EOF

ansible-playbook -i inventory.ini ~/ansible/manage-users.yml
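Hard-coding public keys in vars works for a handful of users, but keys are easier to review and rotate as separate files. One common pattern, assuming a keys/ directory next to the playbook (the directory name is an illustrative choice), is to load each key with Ansible's file lookup:

```yaml
# vars — key material lives in keys/<user>.pub on the control node,
# read at runtime by the lookup plugin, never committed inline.
admin_users:
  - name: alice
    key: "{{ lookup('file', 'keys/alice.pub') }}"
    groups: sudo,adm
```

Lookups run on the control node, so the .pub files never need to exist on the managed servers; the authorized_key task receives the expanded key string exactly as before.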

Essential Ansible Modules

| Module       | Purpose                   | Example Use                      |
|--------------|---------------------------|----------------------------------|
| apt / yum    | Package management        | Install, update, remove packages |
| copy         | Copy files to servers     | Deploy config files              |
| template     | Jinja2 template rendering | Dynamic config generation        |
| service      | Manage systemd services   | Start, stop, restart, enable     |
| user / group | User management           | Create/remove users              |
| file         | File/directory operations | Create dirs, set permissions     |
| lineinfile   | Edit lines in files       | Modify config parameters         |
| sysctl       | Kernel parameters         | Tune networking/security         |
| ufw          | Firewall management       | Configure firewall rules         |
| cron         | Cron job management       | Schedule tasks                   |
| git          | Git operations            | Clone/pull repositories          |

Ansible Tips for Production

# Run playbook on a single server first (limit)
ansible-playbook -i inventory.ini playbook.yml --limit web1

# Run only specific tasks (tags)
ansible-playbook -i inventory.ini playbook.yml --tags "security,ssh"

# Increase parallelism (default is 5)
ansible-playbook -i inventory.ini playbook.yml --forks 20

# Encrypt sensitive variables
ansible-vault create ~/ansible/secrets.yml
ansible-vault edit ~/ansible/secrets.yml
ansible-playbook -i inventory.ini playbook.yml --ask-vault-pass

# Generate a facts report for all servers
ansible all -i inventory.ini -m setup --tree /tmp/facts/
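Typing -i inventory.ini on every command gets old. A project-level ansible.cfg, which Ansible picks up automatically from the current directory, can set those defaults; the values here are illustrative, not recommendations for every environment:

```
# ~/ansible/ansible.cfg — project defaults so you can drop -i and --forks
[defaults]
inventory = ./inventory.ini
forks = 20
host_key_checking = False    ; convenient in a lab; keep True in production

[ssh_connection]
pipelining = True            ; fewer SSH round-trips per task
```

With this file in place, `ansible-playbook setup-webserver.yml` run from ~/ansible behaves like the longer commands above.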

The pattern is always the same: describe the desired state in YAML, run the playbook, and Ansible figures out what needs to change. Start with ad-hoc commands, graduate to playbooks, then organize into roles as your infrastructure grows.
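When a playbook grows past a few hundred lines, roles are the standard way to organize it. `ansible-galaxy init rolename` generates the skeleton; as a sketch, the web server playbook above could be split into a role laid out like this (which files you actually populate depends on the role):

```
roles/webserver/
├── tasks/main.yml       # the task list
├── handlers/main.yml    # the Reload nginx handler
├── templates/           # nginx.conf.j2
├── defaults/main.yml    # overridable vars (app_user, app_dir, packages)
└── meta/main.yml        # role dependencies
```

A top-level playbook then shrinks to a hosts line plus `roles: [webserver]`, and the same role can be reused across projects.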


Now that you can configure servers at scale, what happens when one goes down? Next up: Linux High Availability with Keepalived, HAProxy, and clustering.