Hello
There is a problem I want to get some more opinions on, one that I think is still evolving in the DevOps world.
I find that in many organizations, the DevOps engineers are the ones managing the IaC repositories (as stated above, it can be Terraform, CloudFormation, Argo/Flux, Ansible).
Now, a lot of the time developers might want to make changes: adding/updating native-level packages, adding/updating environment variables (sometimes dynamically), maybe adding new services.
Now, I don't expect, for example, a Node.js developer to make a change in a Terraform repository and open a PR, or to update an Ansible playbook. I don't think it is reasonable to ask them to learn new languages/syntax, the repository's git workflows and gotchas, etc.
However, I also think that developers should have finer control over the services they own, and updating an env variable should be their choice and within their power.
How do you enable developers to make changes while not making your team a bottleneck?
I've just written a post on how to install Docker using Ansible that may be interesting to the group:
https://alexhernandez.info/blog/how-to-install-docker-using-ansible/
I'm in the process of learning Ansible for automation and trying to establish good practices.
What credential sets should be used for SSH when running playbooks? I.e., for roles and scheduled playbooks, should there be an Ansible service account and keypair provisioned on managed devices (which gets rotated when someone is offboarded)? For ad-hoc tasks and playbook usage, should technicians running those also use the same Ansible service account and keypair, or should they need to specify their own account and/or keypair specific to them?
What is the best method for handling the credentials? Should this be done at the inventory or playbook level?
Should NOPASSWD be enabled for the service account, or should a password be required for sudo as an added security precaution? (I can see some pros and cons there.)
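For reference, the inventory-level setup I have in mind looks roughly like this (just a sketch; the account name, key path and sudo policy are assumptions on my part):

# group_vars/all.yml (sketch; account name, key path and sudo policy are placeholders)
ansible_user: ansible-svc                         # dedicated service account on managed hosts
ansible_ssh_private_key_file: ~/.ssh/ansible-svc_ed25519
ansible_become: true                              # escalate with sudo
ansible_become_method: sudo
# If sudo requires a password, pass it at runtime with --ask-become-pass
# instead of storing it here; otherwise configure NOPASSWD for the account.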
Thank you in advance!
So the issue I currently have is that I have sysprepped a server but Ansible can not ping it using:
ansible all -i hosts -m win_ping
I've worked out that part of the problem is that in the initial stage, when I generate the self-signed certificate for WinRM before the sysprep process, it uses the first hostname.
As soon as the server reboots and the sysprep process is complete, the hostname changes, so a new certificate needs to be generated with the new hostname using the ConfigureRemotingforAnsible.ps1 script. I do this using the switch:
powershell.exe .\ConfigureRemotingforAnsible.ps1 -ForceNewSSLCert
After this I attempt to ping the server again, but I get the following error:
SBS | UNREACHABLE! => {
"changed": false,
"msg": "basic: HTTPSConnectionPool(host='192.168.1.25', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f359e538310>, 'Connection to 192.168.1.25 timed out. (connect timeout=30)'))",
"unreachable": true
}
I've checked the Windows Firewall and can confirm the correct inbound rules for HTTP/HTTPS are in place.
What else could I be missing? (I'm sure it's the sysprep process that's having an effect, but I can't work out what else it has changed.)
Any pointers would be appreciated!
EDIT: I've now resolved the issue with the help of everyone. For reference, the resolution was that in the AWS environment the instances were hosted in, the firewall needed an inbound WinRM-HTTPS (5986) rule (obviously lock the source down to the CIDR range where your Ansible control node is located).
Thanks for the help from everyone!!
hi all!
I am currently running Manjaro, but I would like to go back to Arch and take the system configuration back to exactly how I want it. I am very keen on Ansible for all my automation, so I found the following repo, which installs Arch by opening up sshd from the ISO and firing commands from Ansible at the Arch installer (since the last time I checked, encryption was added to the repo).
I adapted this to my own needs to add btrfs support and encryption support with LUKS (LUKS1 for GRUB support).
Next up is adding the option to install next to other OSes instead of doing a full wipe, plus an overhaul of my desktop installation, as I would like to run as much of my software as possible on Wayland, possibly with Sway.
Feel free to go check it out and if you have feedback let me know!
https://github.com/33Fraise33/desktop-ansible
P.S. Keep in mind that currently your full drive will be wiped if you run this on a physical PC; I advise testing in a VM (like I do).
Looking to move into a new role where I'll be taking over a team responsible for managing a large number of Windows servers.
I'm coming from a total RHEL and Openshift shop, so Ansible playbooks were the norm.
Curious if Ansible on Windows server is in a healthy place and if a lot of shops are using it. Any advice and / or tips on running Ansible on Windows server appreciated.
Hey everyone,
I'm just trying to understand Ansible a little bit more and familiarize myself with conditionals.
I have a little problem where I'd like to use an Ansible fact based on another value.
"ansible_interfaces": [
{
"connection_name": "Ethernet1",
"default_gateway": null,
"dns_domain": null,
"interface_index": 25,
"interface_name": "Intel(R) 82574L Gigabit Network Connection",
"macaddress": "00:50:56:YY:YY:YY"
},
{
"connection_name": "Ethernet0",
"default_gateway": "192.168.1.1",
"dns_domain": null,
"interface_index": 11,
"interface_name": "vmxnet3 Ethernet Adapter",
"macaddress": "00:50:56:XX:XX:XX"
}
],
I'd like to output the connection_name where the default gateway is 192.168.1.1. With this fact I'd like to run a PowerShell command with win_shell to disable IPv6 on this connection_name.
In my head it looks like this.
If default gw = 192.168.1.1 give me the connection name of this interface.
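The closest I've come to expressing that is a sketch like the following (the selectattr filter works on the fact shown above; the PowerShell part is just my assumption about how to disable IPv6 on the adapter):

- name: Find the connection name whose default gateway is 192.168.1.1
  ansible.builtin.set_fact:
    target_connection: >-
      {{ ansible_interfaces
         | selectattr('default_gateway', 'equalto', '192.168.1.1')
         | map(attribute='connection_name')
         | first }}

- name: Disable the IPv6 binding on that connection
  ansible.windows.win_shell: Disable-NetAdapterBinding -Name "{{ target_connection }}" -ComponentID ms_tcpip6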
Hopefully this is understandable and someone could help me out there.
Thanks
Hi folks! As mentioned in an earlier post, we've slimmed down the Ansible module docs tables so they work better on smaller screens, etc. We also added links where one module mentions another module. This is only on the devel docs today, so we have some time to solicit feedback before it goes live on the main docsite. Feedback always welcome! Here are a couple of examples that show the new tables in action:
https://docs.ansible.com/ansible/devel/collections/community/crypto/acme_certificate_module.html#parameters and https://docs.ansible.com/ansible/devel/collections/community/hashi_vault/hashi_vault_lookup.html#parameters.
I feel like I am missing something crucial; allow me to explain
I am writing roles for various things, which is straightforward. But then I write a one-off playbook to deploy said role. For instance:
I write a role to deploy appA. I write a playbook to deploy appA role to necessary servers
I write a role to deploy appB. I write another playbook to deploy appB to necessary servers
Now I have various playbooks to deploy various roles, and I am pretty sure this is not the way to do it. Ideally I'd have a master playbook (is there such a thing?) and deploy whatever role on whatever server, and each server would have the collection of roles it needs.
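Something like the sketch below is what I have in mind (the group names are placeholders), but I'm not sure whether it's the idiomatic way:

# site.yml (sketch): one entry point that maps roles to host groups
- hosts: appA_servers
  roles:
    - appA

- hosts: appB_servers
  roles:
    - appB

# run everything:      ansible-playbook -i hosts site.yml
# run only one app:    ansible-playbook -i hosts site.yml --limit appA_servers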
Hello, new to Ansible. I want to get some pointers on how I can run my Ansible playbooks for deployment after I resolve a pull request to the master branch. Do I need a separate machine set up with Ansible on it? I can't find much info on this; the only relevant article I've checked dealt with Docker (which I'm not using at the moment).
Edit:
Thanks for replies. I have setup a github action to trigger on a master pull request. So far it seems to work, and triggers a simple playbook.
I have added the SSH key to the repo-> secrets, and inject it into the workflow.
A question I have is how to handle sensitive data/credentials which are needed for the Ansible playbooks (for example I have Docker login credentials, in order to log in to Docker Hub and pull the image). On my local machine I use ansible-vault, but how do I handle this with GitHub Actions?
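The workflow step I've sketched so far looks roughly like this (the secret name and file paths are my own assumptions):

# .github/workflows/deploy.yml (sketch): one step inside the deploy job
- name: Run playbook with vaulted variables
  env:
    VAULT_PASSWORD: ${{ secrets.ANSIBLE_VAULT_PASSWORD }}
  run: |
    echo "$VAULT_PASSWORD" > .vault_pass
    ansible-playbook -i hosts deploy.yml --vault-password-file .vault_pass
    rm -f .vault_pass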
Thank you for your help and insights!
P.S. any tips would also be very appreciated, thank you!
AWS has been adding lots of features to SSM. You can:
- run multiple SSM documents across multiple platforms (Windows, Linux... which SSM can auto-detect)
- share SSM documents across AWS accounts
- see patch state of various packages, and logs in the AWS console without extra configuration
- compose together multiple SSM documents in YAML files
One of the big benefits of Ansible is that it's agentless, but what if you already have the SSM agent on all your instances anyway? Are there any important features Ansible has for solving complex scenarios, that SSM doesn't have at this point?
I don't have much experience with Ansible myself -- I've been able to fulfill any requirements with SSM so far. But for Ansible users out there who have had to use SSM recently -- were there any disappointments -- where you were like, dang, I could've done this in Ansible, but it's super difficult, error-prone, or impossible in SSM? I'm asking, because at my organization, I'm evaluating whether to go all in on SSM, or whether to provide the option of Ansible...
Edit 1: Please assume in this scenario that portability is not a concern, and that the organization will never need to leave AWS.
Edit 2: I'm looking primarily for technical limitations.
Spend 15 minutes a day to improve your Ansible skills. Hi everyone, hello from Thetips4you. I am a small YouTuber and have a YouTube channel called Thetips4you, where I publish tutorials on Ansible for beginners weekly. My goal is to share knowledge about new technologies with others. Have you ever searched for an easy-to-understand Ansible guide? Here it is: a 3 hr 30 min video tutorial series, completely free: https://youtube.com/playlist?list=PLVx1qovxj-al0Knm1A0eEXfGyd5kCi16p
I put a lot of effort into creating this video series on Ansible for beginners. It consists of the basics of Ansible, starting from setting up Ansible, variables and facts, ad hoc commands, moving on to creating Ansible playbooks, real use cases, deploying Docker containers using Ansible, usage of handlers, and finally how to convert your playbook into roles.
I am sure this will help you enhance your skills. I would appreciate you taking a look and sharing your feedback. :)
Hello,
I'm setting up a new homelab environment and am trying to align with good DevOps practices by managing all the configuration through Ansible. However, I've now hit a stumbling block with modifying a configuration file that's in JSON format.
Since the config file contains some sensitive parameters, I can't manage the entire file in the git repo holding the playbooks. But I have no idea how to modify the config file on the remote host, since JSON is strict about structure and does not support comments (which would rule out blockinfile, I think?).
Here is the original config file that I'm trying to modify (with some values redacted):
{
  "root": "/home/step/certs/root_ca.crt",
  "federatedRoots": null,
  "crt": "/home/step/certs/intermediate_ca.crt",
  "key": "/home/step/secrets/intermediate_ca_key",
  "address": ":9000",
  "insecureAddress": "",
  "dnsNames": [
    "localhost",
    "ca.domain.tld"
  ],
  "logger": {
    "format": "text"
  },
  "db": {
    "type": "badgerv2",
    "dataSource": "/home/step/db",
    "badgerFileLoadingMode": ""
  },
  "authority": {
    "provisioners": [
      {
        "type": "JWK",
        "name": "admin",
        "key": {
          "use": "sig",
          "kty": "EC",
          "kid": "hunter",
          "crv": "P-256",
          "alg": "ES256",
          "x": "change",
          "y": "me"
        },
        "encryptedKey": "foo"
      }
    ],
    "template": {},
    "backdate": "1m0s"
  },
  "tls": {
    "cipherSuites": [
      "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
    ],
    "minVersion": 1.2,
    "maxVersion": 1.3,
    "renegotiation": false
  }
}
I'm trying to add the following block of config after the encryptedKey key:
{
  "type": "ACME",
  "name": "mgmt",
  "forceCN": true,
  "claims": {
    "minTLSCertDuration": "24h",
    "maxTLSCertDuration": "43800h",
    "defaultTLSCertDuration": "43800h"
  },
  "options": {
    "x509": {
      "templateFile": "templates/certs/x509/leaf.tpl"
    }
  }
}
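The closest approach I've found so far is a sketch like this, which reads the file, merges the new provisioner in, and writes it back (the config path and the acme_provisioner variable holding the block above are placeholders on my side):

- name: Read the current step-ca configuration
  ansible.builtin.slurp:
    src: /home/step/config/ca.json
  register: ca_config_raw

- name: Append the ACME provisioner and write the file back
  vars:
    ca_config: "{{ ca_config_raw.content | b64decode | from_json }}"
    new_provisioners: "{{ ca_config.authority.provisioners + [acme_provisioner] }}"
    updated_config: "{{ ca_config | combine({'authority': {'provisioners': new_provisioners}}, recursive=True) }}"
  ansible.builtin.copy:
    content: "{{ updated_config | to_nice_json }}"
    dest: /home/step/config/ca.json
    mode: '0600'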
Hey guys,
Today I decided to spend some time writing an article on my abandoned website. Some of you (more like a lot of you probably) are using Ansible at work, so you might find it useful.
In short, the article is about using Molecule for testing your Ansible roles.
Here's the link to the article: https://rdbreak.com/?p=164
Hopefully it'll help someone. Cheers and thanks!
Hi guys, I'm writing a role that adds a list of users, with information like their public key, username, groups, etc., from a JSON file to a server. But some of the user keys have a / followed by a character in them, which JSON sometimes treats as an unknown escape character. I really want to automate this; I could write a script, but then you have to SSH in and run the script. Any help would be great!
Hi everyone, I have recently published one of my Ansible roles that I use to keep track of my Home Assistant configuration files: https://github.com/bellackn/ansible-role-hass-control
I have also written a blog post that briefly describes how to use it: https://www.bumbar.blog/tech/ansible-home-assistant-configs/
Hope somebody finds it useful. :)
Hi Ansible community,
Need your expertise on this. I would like to know about disabling the Ansible Tower job isolation security bubble.
I am aware that there is a security bubble integration between Ansible Tower and CyberArk using the Central Credential Provider (CCP) approach.
However, with the Credential Provider (CP) approach, this security bubble integration will be bypassed. Refer to this documentation: https://docs.ansible.com/ansible-tower/3.2.2/html/administration/proot_func_variables.html
A few questions in mind:
1: Does it mean that by using the CP approach, the Ansible Tower and CyberArk CP integration will not be encrypted and can easily be bypassed?
2: What is the required compatible security setting between the Ansible Tower and CyberArk CP integration?
3: In regards to disabling the job isolation bubblewrap on Ansible Tower, what are the mitigating controls if Tower is compromised?
As my Ansible Tower does not have entitlement for Ansible Automation Premium/Standard support, I am reaching out to this community.
Been having a lot of trouble getting this to work. It seems as if it's related to the path; I have tried both the download URL and local file paths. Tried removing the var and placing the value directly, but no go. Even stated it was an MSI, but nothing works. Am I missing something?
- name: Install 3cx desktop app
  ansible.windows.win_package:
    path: '{{ voip_url }}'
    state: present
And the error I get
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: at <ScriptBlock>, <No file>: line 1380
fatal: [192.168.10.240]: FAILED! => {"changed": false, "msg": "Unhandled exception while executing module: The term 'Get-AnsibleWindowsWebRequestSpec' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again."}
I would like to know if anyone here could provide me with some direction and/or a guide to get Ansible set up with either Rundeck or AWX, preferably Rundeck... heck, I'm open to any suggestions. I've been able to set up my server, install Ansible, and create an inventory file as well as a playbook, and have that properly make configuration changes on the Cisco device I wanted to configure, but I've not had any luck getting this to work in Rundeck. What I'm looking to achieve is to create jobs within Rundeck and have my technicians log in and run them to make configuration changes on our Cisco devices, most specifically VLAN changes. In addition to this, I'd like to use it to streamline configurations across like devices on our network. I'm a noob at all of this, so anyone who could point me in the right direction or get me going, I'd much appreciate your correspondence.
I tried to install the ansible-galaxy roles in the Dockerfile:
FROM python:3.7-slim-stretch
RUN apt-get update && apt-get install -y \
ssh \
sshpass \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /tmp
COPY requirements.txt /tmp
RUN pip install -r requirements.txt && rm requirements.txt
RUN groupadd --gid 1001 ansible && useradd --uid 1001 --gid 1001 -m ansible
COPY ansible.cfg /home/ansible/.ansible.cfg
RUN mkdir -p /home/ansible/.ansible/roles/ && chown -R ansible:ansible /home/ansible/
COPY requirements.yml /home/ansible/requirements.yml
RUN ansible-galaxy install -r /home/ansible/requirements.yml
WORKDIR /playbook
USER ansible
requirements.txt
ansible==3.2.0
requirements.yml
- src: geerlingguy.php
I start Docker with docker-compose. The config file is:
version: "3.8"
services:
ansible:
build:
context: ./ansible
dockerfile: Dockerfile
image: ansible
container_name: ansible
hostname: ansible
#command: ['/bin/bash', '-c', 'ansible-galaxy install -r ./playbook/requirements.yml']
tty: true
volumes:
- ./playbook:/playbook
- ./roles:/home/ansible/.ansible/roles/
This way I can see the ansible-galaxy-installed roles I defined in the requirements.yml file.
But when I log into the container ansible, I can't find the roles at all.
If I remove those two lines from the Dockerfile and use the command entry in the docker-compose.yml file instead, then after the build I can't log into the container. There is an error:
Error response from daemon: Container 11111111111111111111111111111111111111111111111111111111111 is not running
I'm confused: why weren't the roles installed in the image if the install action is set in the Dockerfile?
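If the bind mount ./roles:/home/ansible/.ansible/roles/ is hiding what the image installed at that path (a bind mount replaces the image's contents at the mount point), one workaround I'm considering is to mount the host roles somewhere else and widen the roles path. A sketch, where the paths are assumptions:

# docker-compose.yml (sketch): keep the baked-in roles visible, mount host roles elsewhere
services:
  ansible:
    build:
      context: ./ansible
      dockerfile: Dockerfile
    image: ansible
    tty: true
    environment:
      ANSIBLE_ROLES_PATH: /home/ansible/.ansible/roles:/playbook/roles
    volumes:
      - ./playbook:/playbook
      - ./roles:/playbook/roles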
Hello,
I'm learning Terraform, and I already know Ansible.
I'm wondering what best practice you recommend if you want to do provisioning with Terraform and configuration with Ansible.
Launching Terraform from Ansible, or launching Ansible from Terraform?
And why!
Hello everybody, I am very new to Juniper and Ansible,
I want to create a LAG interface on my switch through Ansible, but when I use the module junipernetworks.junos.junos_l2_interfaces to delete unit 0 from the LAG members, it doesn't work.
This is my config in my switch for my interfaces:
ge-0/0/X {
    disable;
    unit 0 {
        family ethernet-switching {
            storm-control default;
        }
    }
}
I understood that first I must delete unit 0 under the members, then create the LAG interface and assign members to it.
I used this config in my Ansible playbook:
- name: "Delete L2
junipernetworks.junos.junos_l2_interfaces:
config:
- name: ge-0/0/X
state: deleted
But because the module doesn't work, I don't know how I can do that.
Instead, I wrote a set file that includes "delete interface ge-0/0/X unit 0" for all of the members and pushed it to the devices, then enabled my interfaces with another task and ran my Jinja2 template to create the LAGs and push them to the devices.
this is my set file.
"delete interfaces ge-0/0/x unit 0"
and this is my playbook:
- name: delete layer 2 from interfaces
  juniper_junos_config:
    src: "config/interface.set"
    load: set
    comment: "playbook laginterface"

- name: enable interfaces
  junipernetworks.junos.junos_interfaces:
    config:
      - name: "{{ item.name }}"
        enabled: "{{ item.enabled }}"
    state: replaced
  loop: "{{ interface_enable }}"
Then I use another task to create the LAG interfaces with Jinja2.
It works, but I know it is not the correct way.
Can you help me figure out how to do this without a set file?
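One direction that might avoid the set file (a sketch; the interface name is a placeholder) is to hand the delete straight to junos_config as config lines:

- name: Delete unit 0 from the future LAG members
  junipernetworks.junos.junos_config:
    lines:
      - delete interfaces ge-0/0/X unit 0
    comment: "remove unit 0 before building the LAG"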
Hey there,
I'm currently testing NetBox, so my plan is to fill in the data with Ansible, and the modules are working fine.
My current problem is that I want to add all interfaces of a host to the NetBox inventory, but the hostvars fact structure is... annoying.
In the facts, all interfaces have their own key, like "ansible_eth0", plus there is an additional key "ansible_interfaces" with all interface names as the value.
This is a non-working example, only to show what I want to do:
- hosts: localhost
  tasks:
    - name: Attach a new Interface
      netbox_ip_address:
        netbox_url: http://foo
        netbox_token: bar
        data:
          address: '{{ interface.ipv4.address }}'
          assigned_object:
            name: "{{ interface.device }}"
            device: "{{ interface.device }}"
        state: new
I've played around with many different loops but I can't get it running. A Jinja template and a for loop would be really easy, but I don't have a clue how to do this in a play.
My loops are based on this:
with_items: "{{ groups['servers'] | map('extract', hostvars) }}"
I kind of got good results with with_subelements, but I couldn't get it running properly and the loop executed too many items:
loop: "{{ lookup('subelements', groups['servers'] | map('extract', hostvars), 'ansible_interfaces', { 'skip_missing': true }) }}"
Maybe someone has a good idea for this...
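For reference, the shape I'm aiming for is something like this sketch; it assumes facts were already gathered for the hosts in 'servers', and the URL/token are the same placeholders as above:

- hosts: localhost
  tasks:
    - name: Attach the primary IPv4 address of every interface
      netbox_ip_address:
        netbox_url: http://foo
        netbox_token: bar
        data:
          address: "{{ iface_facts.ipv4.address }}"
          assigned_object:
            name: "{{ item.1 }}"
            device: "{{ item.0.inventory_hostname }}"
        state: new
      vars:
        # per-interface facts live under ansible_<name>; dashes become underscores
        iface_facts: "{{ item.0['ansible_' ~ (item.1 | replace('-', '_'))] }}"
      loop: "{{ groups['servers'] | map('extract', hostvars) | list
                | subelements('ansible_interfaces', skip_missing=True) }}"
      when: iface_facts.ipv4 is defined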
I've been using Ansible to provision everything, including a large server running lots of services. It's working pretty well, but I like the idea of separating it into different VMs.
What are my options for doing this? I'm not sure if Docker is the right solution for me, since it limits you to one command running per container, unlike a VM.
And I don't need the caching capabilities of Docker. Anytime I run Ansible it should change whatever on the server has changed from the Ansible script, like it does when you run Ansible against a real server.
And there's no simple way to point an Ansible playbook at a container like you can point it at a server or VM through SSH. I'd have to copy my entire Ansible playbook to the Docker directory and then run the ansible command within the Dockerfile.
The system I use now, where I'm provisioning the large server using lots of Ansible roles, works well. I just wanted to see if there's any simple way to reuse my existing Ansible roles (without having to change them much) to provision isolated VMs or containers, which could bring some benefits.
Hello all,
I work for a company with some very senior Chef and Puppet consultants. One of the biggest things they dog on Ansible for is its scalability. We service very large customers who could have thousands of servers, and the complaint from our Puppet and Chef guys is that because Ansible uses SSH as a push mechanism, it doesn't scale well. They state that it chokes after a certain point. Is this true?
Also is there a way to have Ansible check the configuration at a time interval like Puppet or Chef?
Kind regards
Hey everyone!
I decided to spend some time today and write an article on my crappy website. The topic I wanted to write on was Ansible + Molecule so here it is: https://rdbreak.com/?p=164
I hope some of you find it interesting and maybe it even helps you waste less time when developing Ansible roles!
Thank you and have a nice day.
Hello,
I recently set up roles to create users in two databases, either MariaDB or PostgreSQL, and I used the relevant community.postgresql.postgresql_user and community.mysql.mysql_user modules.
For the user password, I set a per-host variable: a string encrypted with ansible-vault. I'm saving the string (encrypted), not the hash, because it's also inserted into another template.
The problem is that I'm not sure that ansible actually puts the right password. Example snippet:
- name: "Create MySQL database user for Authelia"
become: false
community.mysql.mysql_user:
name: authelia
password: authelia_password
state: present
priv: "authelia.*:ALL"
login_unix_socket: /run/mysql/mysql.sock
login_user: root
when: authelia_db_type == 'mysql'
authelia_password is the password as-is, encrypted with ansible-vault encrypt_string.
I ran it against a database where the user was already there, and Ansible actually reported a changed status (it shouldn't have).
On a first test of the database, the connection did not work, despite the password being correct (checked also with ansible MYHOST -m debug -a var="authelia_password" -i hosts --vault-id VAULT_ID). Authentication to the database (also seen with PostgreSQL, same type of task) failed, and only setting the password (identical to what was in the vault) directly in the database solved the problem.
I'm guessing that Ansible is giving the database something other than the actual password. How can I find out what's going on? I would exclude newlines, etc., because as I said, decrypting the password shows the exact value.
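One thing I still want to rule out (this is only a guess, in case the snippet above isn't simply redacted): if the task passes the bare string authelia_password rather than the Jinja expression, the literal text becomes the password. The templated form would look like:

- name: "Create MySQL database user for Authelia"
  become: false
  community.mysql.mysql_user:
    name: authelia
    password: "{{ authelia_password }}"
    state: present
    priv: "authelia.*:ALL"
    login_unix_socket: /run/mysql/mysql.sock
    login_user: root
  no_log: true
  when: authelia_db_type == 'mysql'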
ansible --version
ansible [core 2.11.6]
Hi Everyone!
After a few years working with Ansible and Jenkins as our main CI/CD application, we want to change our CI/CD application to GitLab CI. Our objective with this migration is to create reusable Ansible roles in different GitLab projects, and then add them as requirements to our playbooks. Is this possible with GitLab CI, or do we need to use ansible-galaxy for that? If it is possible, how can we implement it?
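From what I can tell, ansible-galaxy can install roles directly from git repositories, so a requirements.yml pointing at the GitLab projects might be enough. A sketch, where the URLs, names and versions are placeholders:

# requirements.yml (sketch)
roles:
  - name: common
    src: https://gitlab.example.com/ansible-roles/common.git
    scm: git
    version: main
  - name: webserver
    src: https://gitlab.example.com/ansible-roles/webserver.git
    scm: git
    version: "1.2.0"

# installed in the pipeline with: ansible-galaxy install -r requirements.yml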
Thanks for your support!
Regards,
I hope this post fits here. If not, please let me know...
I am exploring the potential integration/combination of the three products in the title to provide the most comprehensive solution for infrastructure automation or IaC. Note, I am only considering the free versions of Terraform and Ansible, which is why VRA is in the picture, to provide certain enterprise features such as access control and audit...
According to VMware, VRA already has proper integration with Terraform and Ansible. So my plan sounds possible, but I lack the experience of using them as a combo... So have you done so? If so, what would the workflow look like?
In my mind, the high-level work flow should be:
Does this sound viable, or am I totally missing the point...? Any other real-life suggestions?
Lastly, which tool of the three could potentially be used to run custom scripts to retrieve infrastructure or network fabric running status? Presumably the VRA/embedded VRO?
I'm attempting to connect to a Windows virtual machine in AWX. I'm able to manually SSH into the VM, but when I run win_ping in AWX I receive this error:
"changed": false,"msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"\
echo ~/.ansible/tmp \
"&& mkdir "\ echo ~/.ansible/tmp/ansible-tmp-1641411445.9786844-205-181280620554310 \
So far I have tried changing remote_tmp to /tmp/ansible (as well as /tmp) in ansible.cfg.
Hey Reddit,
I have Ansible up and running on WSL2 with Ubuntu (20.04). I am running into issues on the WinRM side, and Ansible can't connect, saying the host is unreachable. I am trying to use CredSSP as the authentication method. Does anyone know any good reads on general authentication methods, Windows WinRM configuration, and Ansible working with Windows? Thank you.
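For reference, the connection variables I'm experimenting with look roughly like this (a sketch; the credentials and the cert-validation setting are placeholders, and CredSSP needs pywinrm[credssp] installed on the control node):

# host_vars / group_vars (sketch)
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_transport: credssp
ansible_winrm_server_cert_validation: ignore   # only for lab/self-signed certificates
ansible_user: Administrator
ansible_password: "{{ vault_windows_password }}"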
Hi, I have a weird question about the term "ansible," which is used by a number of science fiction authors to describe a device for interstellar communication. I first became aware of it reading Ursula Le Guin's books, but when I read The Long Way to a Small, Angry Planet by Becky Chambers recently, the term was used there as well. The Wikipedia page says that Le Guin coined the term but many authors have gone on to use it since then, from Orson Scott Card to Doctor Who writers. I was just curious if anyone knew how this happened - of course Le Guin wouldn't have been the first person to come up with an interstellar communication device, but how did multiple other authors come to use her unique name for it?
I have a problem with Ansible, so let me explain it in more detail. I have an inventory file inv.yml with a list of some hosts, which looks like this:
---
all:
  hosts:
    csr1:
      ansible_host: 192.168.10.11
      os: ios
    csr2: etc...
I have a folder "group_vars" with a few files in it. One is all.yml, and it works: all hosts are getting variables from this file. But I also have files amers.yml and emear.yml, and each of them has a region-specific SNMP config which looks like this:
---
snmp:
  contact: Joe Smith
  location: Americas
  communities:
    - community: public
      type: ro
I believe my problem is that I cannot correctly reference when each host should use the emear or amers group variable file. Can you please point me to how I can correctly reference the amers and emear variable files for specific devices in the inventory?
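What I'm considering (a sketch; the second host's address is a placeholder) is to add the regions as child groups in inv.yml, so that group_vars/amers.yml and group_vars/emear.yml apply automatically to the hosts placed under them:

---
all:
  children:
    amers:
      hosts:
        csr1:
          ansible_host: 192.168.10.11
          os: ios
    emear:
      hosts:
        csr2:
          ansible_host: 192.168.10.12   # placeholder
          os: ios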
I'm giving a talk on Ansible and technical automation in general next week, and I wanted to reach out to the community to get some novel and unique use cases for ansible (or any other similar automation tool!). We cover the obvious really valuable use cases (patching, config management, provisioning, etc) really thoroughly in plenty of places, so I'm looking for places that the community and administrators use this sort of tooling in their every day lives, for work or personal use, that might not be as popular. Nothing is a bad example!
With the new REST API available since RouterOS v7,
did anybody (re-)consider writing a proper Ansible module?
Given the popularity of MikroTik devices and the excellence of the company,
I am actually surprised that there isn't such an implementation already out there.
Cheers,
Raoul
Hello All,
Is anyone using Ansible for macOS, Linux and Windows management?
If yes, then how are you managing the Windows GPOs, like SRP policies and password policies, and Linux package management?
I have seen the WinRM module and win_dsc, but I'm looking for more advice on it.
I have a very strong Ansible and Linux background. I think k8s is wonderful but for a lot of use cases I cannot justify using Terraform and increasing the complexity of the environment I manage. Hopefully somebody can point out my flaw. I know the theory that TF is infra provisioning and Ansible is CM but practically speaking today Ansible seems to always have the solution to the problem as elegantly as can be expected.
Somebody convince me: what is Ansible lacking that would require me to use Terraform?
I didn't find the ansible.log on the server. Isn't logging enabled by default?
I found that if I set this in ansible.cfg, it does save a log file on the system:
log_path=/var/log/ansible.log
Second, is it possible to save execution state logging on the target server? Something like Terraform's state file, to record what Ansible has done on the target server. Or if Ansible can save this state on its own server, even better.
I found this setting in ansible.cfg:
no_target_syslog = False
I have 2 groups created in the hosts file as given below. I have a playbook consisting of 2 plays: the 1st play is executed on the IP in the 'ACTIVEFIREWALL' group, and the 2nd play is executed on the IP in the 'STANDBYFIREWALL' group.
Many a time, I have no IP to give for the 'STANDBYFIREWALL' group (so the IP portion remains blank but everything else shown below stays in the hosts file). So the 2nd play just stops and does nothing until timeout. However, when I delete the entire 'STANDBYFIREWALL' group from the inventory file, Ansible ignores the 2nd play containing the 'STANDBYFIREWALL' group and moves on.
Is there any way I can make Ansible move on without deleting the STANDBYFIREWALL group from the hosts file when there is no IP to give?
[ACTIVEFIREWALL]
FW1 ansible_host=10.224.240.241
[ACTIVEFIREWALL:vars]
ansible_user=username
ansible_ssh_pass=password
[STANDBYFIREWALL]
FW2 ansible_host=10.224.240.244
[STANDBYFIREWALL:vars]
ansible_user=username
ansible_ssh_pass=password
On a new project, one of the non-negotiable requirements is that _nobody_ can have access to the VMs I have to create. No public IP or jump server.
I have full control over the infra from a Terraform standpoint, and my current solution is to use cloud-init/user data and hope that my scripts run well locally (making debugging extremely hard).
That also means that if anything goes wrong I need to recreate the machine, which is a problem because the IAM configuration is not managed by me and would be lost.
Is there any alternative, besides installing Ansible and the playbook on the VM and then running it from Terraform, in order to be able to do configuration management in that scenario?