I am trying to run Ansible against an inventory file with `ansible -i hosts-prod all -u root -m ping`, and it is failing with this message:
ERROR: The file hosts-prod is marked as executable,
but failed to execute correctly. If this is not supposed
to be an executable script, correct this with
`chmod -x hosts-prod`.
I believe this is because I am using VirtualBox shared folders, which forces all my files to ug+rwx, and VirtualBox does not permit changing permissions on shared folders (at least not on folders shared from Windows, which is my situation).
Is there a way to allow Ansible to run this file? I can see several options:
Edit hosts-prod to become an executable file. I don't know what's involved in this (being new to Ansible, obviously).
Set a configuration option in Ansible to tell it not to run this file as executable - just treat it as the static configuration file it is. I can't find an option to do this, so I suspect it's not possible.
Move the file outside of shared-folders: not an option in my case.
Your better idea.
All assistance/ideas appreciated!
The actual hosts-prod config file looks as follows, so any tips on making it internally executable would be welcome:
web01 ansible_ssh_host=web01.example.com
db01 ansible_ssh_host=db01.example.com
[webservers]
web01
[dbservers]
db01
[all:vars]
ansible_ssh_user=root
Executable inventories are treated as dynamic inventory scripts whose output is parsed as JSON instead of as an INI file, so you can convert yours into a script that prints JSON. On top of that, Ansible passes some arguments to the script (`--list` and `--host <hostname>`), so simply cat-ing the original file isn't enough:
#!/bin/bash
# Emit the whole inventory as JSON (Ansible normally calls this with --list)
cat <<EOF
{
  "_meta": {
    "hostvars": {
      "host1": { "some_var": "value" }
    }
  },
  "hostgroup1": [
    "host1",
    "host2"
  ]
}
EOF
Not as elegant as a simple 'cat', but should work.
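If you want the script to honor the arguments Ansible actually sends, a slightly fuller sketch could look like the following (host and group names are placeholders):
#!/bin/bash
# Ansible calls dynamic inventory scripts with --list (whole inventory)
# or --host <name> (variables for a single host).
case "$1" in
  --host)
    # per-host output can stay empty when _meta is returned from --list
    echo '{}'
    ;;
  *)
    cat <<EOF
{
  "_meta": { "hostvars": { "host1": { "some_var": "value" } } },
  "hostgroup1": [ "host1", "host2" ]
}
EOF
    ;;
esac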
hkariti's answer is the first and closest to the original question. I ended up rewriting the config file entirely as a Ruby script and that is working fine. I thought I'd share that code here since finding complete examples of Ansible dynamic inventory files wasn't super easy for me. The file differs from a static inventory in how you associate variables with the machine listings (per-group vars and, optionally, the _meta key).
#!/usr/bin/env ruby
# this file must be executable (chmod +x) in order for ansible to use it
require 'json'

module LT
  module Ansible
    def self.staging_inventory
      {
        local: {
          hosts: ["127.0.0.1"],
          vars: {
            ansible_connection: "local"
          }
        },
        common: {
          hosts: [],
          children: ["web", "db"],
          vars: {
            ansible_connection: "ssh"
          }
        },
        web: {
          hosts: [],
          children: ["web_staging"]
        },
        db: {
          hosts: [],
          children: ["db_staging"]
        },
        web_staging: {
          hosts: ["webdb01-ci"],
          vars: {
            # server specific vars here
          }
        },
        db_staging: {
          hosts: ["webdb01-ci"]
        }
      }
    end
  end
end

# ansible will pass "--list" to this file when run from command line
# testing for --list should let us require this file in code libraries as well
if ARGV.find_index("--list")
  puts LT::Ansible.staging_inventory.to_json
end
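A quick way to sanity-check a script like this before pointing Ansible at it (the file name mirrors the question; python -m json.tool is just one way to pretty-print the output):
chmod +x hosts-prod                           # the inventory script must be executable
./hosts-prod --list | python -m json.tool     # inspect the JSON it emits
ansible -i hosts-prod all -u root -m ping     # same command as in the question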
I see an answer is accepted already; however, let me provide an alternative answer.
The file you are trying to parse as an Ansible inventory is an executable file, meaning it has the executable permission set on it. If you run ls -l hosts-prod you will see something like -rwxrwxr-x in the output; the x means the file is executable.
From the contents of your file it looks like it is just a static inventory, so if you remove the executable permission it should be treated as a static inventory file again.
chmod -x hosts-prod
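If you are able to clear that bit (i.e. the file is not stuck on a VirtualBox shared folder), a quick way to verify the fix, using the file and command from the question:
chmod -x hosts-prod
ls -l hosts-prod                              # should no longer show any 'x' bits
ansible -i hosts-prod all -u root -m ping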
Related
I am working on a Docker-containerized C++ project that has shell scripts defined to set up the environment (using "source") for compiling and debugging. They set a lot of environment variables that could change at any time, so it would be hard (and tedious to keep up with) to move them all into the launch.json file; I need to call the scripts before compiling or debugging.
The scripts only need to run once, so if there were a way to run them right after connecting to the container that would be the best solution; however, I cannot find anything like that.
I have tried using "preLaunchTask" in the launch configuration to run a task before debugging, but it seems that the task's shell is different from the debug shell.
Is there any way to handle this?
For the moment I am using a task to generate a .env file
printenv > ${workspaceFolder}/.preenv && \
. ${workspaceFolder}/setupEnv && \
printenv > ${workspaceFolder}/.postenv && \
grep -vFf ${workspaceFolder}/.preenv ${workspaceFolder}/.postenv > ${workspaceFolder}/.env
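The same idea can be wrapped in a small standalone script that a preLaunchTask points at; this is only a sketch, and the setupEnv name is taken from the command above (adjust the paths to your project):
#!/bin/bash
# Capture which variables sourcing the setup script adds or changes,
# and write them to .env so VSCode's "envFile" setting can pick them up.
WORKSPACE="${1:-$PWD}"                    # workspace folder passed as the first argument
printenv | sort > "$WORKSPACE/.preenv"
. "$WORKSPACE/setupEnv"                   # the project's environment setup script
printenv | sort > "$WORKSPACE/.postenv"
# keep only the lines that changed after sourcing
grep -vFf "$WORKSPACE/.preenv" "$WORKSPACE/.postenv" > "$WORKSPACE/.env"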
I have VSCode mount a directory as the container's home directory, and then put a .bash_profile (or other suitable shell startup file) containing the necessary setup in that directory.
My /path/to/.devcontainer/devcontainer.json includes:
"mounts": [
"source=${localWorkspaceFolder}/.devcontainer/home,target=/home/username,type=bind,consistency=delegated"
],
"remoteUser": "username",
"userEnvProbe": "loginShell", // Ensures .bash_profile fires
Then my /path/to/.devcontainer/home/.bash_profile contains the necessary invocations to set my environment.
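For reference, the .bash_profile in that mounted home directory can be as simple as sourcing whatever the project needs; the path below is purely illustrative:
# ~/.bash_profile inside the container (mounted from .devcontainer/home)
# Source the project's environment setup script; this path is an example only.
if [ -f /workspaces/myproject/scripts/env.sh ]; then
  . /workspaces/myproject/scripts/env.sh
fi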
My current workflow for a project is the following:
build the project (via catkin)
source a setup.sh script (generated by catkin, which I would rather not modify) that sets environment variables and names needed by my executable.
Run "MyProgram", which is only available after sourcing the "setup.sh" script.
I would like to be able to debug my project in Visual Studio Code. To do this, I have defined a task building the executable via catkin, named "catkin build all", and I have defined a second task as:
{
"type": "shell",
"label": "load programs",
"command": "source /some_folder/setup.sh",
"group": "build",
"dependsOn": ["catkin build all"]
}
Which is the "preLaunchTask" of my lanuch.json launch configuration.
Launching debug will correctly compile the project, but execution fails with error "launch: program myProgram does not exist". Indeed program MyProgram can not be found if setup.sh is not sourced, but is should be sourced by the "preLaunchTask".
In my launch.json i can also set "program" to "/full/path/to/myProgram" instead of "myProgram", but in this case shared libraries are not found, as setup.sh would take care of that.
I have also tried to source setup.sh on a shell and then launch visual studio code from the same shell, but this did not solve the "launch: program myProgram does not exist" problem.
Do tasks run on different shells? How can I have the preLaunchTask running in the same shell as the subsequent program code? Or any other hint on how to get my workflow working?
My solution is to use an envFile.
In one terminal, source your file, for example: source /opt/ros/melodic/setup.bash
Recover the changes by using: printenv | grep melodic
Create a .env file in your repo with those environment variables (except PWD):
LD_LIBRARY_PATH=/opt/ros/melodic/lib
ROS_ETC_DIR=/opt/ros/melodic/etc/ros
CMAKE_PREFIX_PATH=/opt/ros/melodic
ROS_ROOT=/opt/ros/melodic/share/ros
PYTHONPATH=/opt/ros/melodic/lib/python2.7/dist-packages
ROS_PACKAGE_PATH=/opt/ros/melodic/share
PATH=/opt/ros/melodic/bin:/home/alexis/.nvm/versions/node/v8.16.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
PKG_CONFIG_PATH=/opt/ros/melodic/lib/pkgconfig
ROS_DISTRO=melodic
Add the following line to your launch.json configuration: "envFile": "${workspaceFolder}/.env"
Note: this could be automated in a preLaunchTask using:
command: "source /opt/ros/melodic/setup.bash; printenv | grep melodic > ${workspaceFolder}/.env"
Got that info from here
I'm using two files:
1- A .bat file which contains, in one line:
"C:\Path_to\putty.exe" -ssh servername -l myuserid -pw my password -m "C:\Path_to_Commands_File\Commands.txt" -t
2- A Commands.txt file which contains one of these commands:
Command#1:
/Path_to_sasprogram/sasprogram.sas
OR
Commands#2 in the following order:
cd /Path_to_sasprogram/
sas sasprogram.sas
However, sasprogram.sas does not get executed. So, I used PuTTY to manually execute the commands above. With commands #2, I get this error: "sas: command not found".
Any suggestions/help would be greatly appreciated! BTW, I tried some solutions already posted, but they didn't work for me. Thanks
It sounds like sas is not on the PATH of your session.
Try:
/path_to_sas_executable/sas sasprogram.sas
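If you would rather keep the short invocation, another option is to extend PATH inside Commands.txt before calling sas; the install directory below is a placeholder:
# Commands.txt -- executed on the remote host via PuTTY's -m option
export PATH="$PATH:/path_to_sas_executable"   # placeholder SAS install directory
cd /Path_to_sasprogram/
sas sasprogram.sas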
I have the following code under package pack1. The name of the file is pack1.go:
package pack1
var Pack1Int int = 42
var pack1Float = 3.14
func ReturnStr() string {
return "Hello world!"
}
And the following code in the main program. The name of the file is package_test.go:
package main
import (
"fmt"
"./pack1"
)
func main() {
var test1 string
test1 = pack1.ReturnStr()
fmt.Printf("Return string from pack1 : %s\n", test1)
fmt.Printf("Integer from pack1 : %d\n", pack1.Pack1Int)
}
When I try to run it with the command go run package_test.go I get the following error:
go run: cannot run *_test.go files (package_test.go)
But if I rename the file to abc.go then I get the proper output, i.e.
Return string from pack1 : Hello world!
Integer from pack1 : 42
I am curious about what is wrong with using package_test.go as a file name.
For code with only the main package, this name works fine.
Is this a bug in Go or am I doing something wrong?
Not a bug; it's designed that way. go run detects _test.go files and considers them test files for a package; test files are compiled as a separate package, then linked and run with the main test binary.
It's recommended to put your package files under GOPATH/src/PACK_NAME/ and then run your *_test.go files with go test.
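For example, assuming GOPATH is already set, the layout from the question could be moved under GOPATH and exercised roughly like this (paths are illustrative):
# copy the package so it can be imported by path "pack1"
mkdir -p "$GOPATH/src/pack1"
cp pack1/pack1.go "$GOPATH/src/pack1/"
# any pack1 *_test.go files live next to pack1.go and run with:
cd "$GOPATH/src/pack1" && go test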
You can't name your program files *_test.go, as that naming is part of Go's integrated testing system.
To write a new test suite, create a file whose name ends _test.go that contains the TestXxx functions as described here. Put the file in the same package as the one being tested. The file will be excluded from regular package builds but will be included when the “go test” command is run. For more detail, run “go help test” and “go help testflag”.
Just rename package_test.go to packagetest.go
For BASH
Run go run PATH_TO_FILES/!(*_test).go
NOTE
If you get an "event not found" error when running this command, you probably need to enable extended globbing in your bash terminal:
Run shopt -s extglob
then run go run PATH_TO_FILES/!(*_test).go again.
For those using ZSH
setopt extendedglob # enable extended globbing
Next:
foo*~*bar* # match everything that starts with foo but doesn't contain bar
So in my case: go run PATH_TO_FILES/*~*_test.go*
If you are using Linux:
go run `ls PATH_TO_FILES/*.go | grep -v _test.go`
It works for me.
I'm running on Mac OS X and I'm using Sublime Text 2 to code.
I've found the command + B option to build and command + shift + B to build and run.
Is it possible to run a program (or script) and pass arguments? Example:
myProg arg1 arg2
Note: I'm using multiple languages (C++, Java, Python), so I hope there is a way to set the arguments for each project and not for all builds.
Edit
I want to set the parameters for a program call, a little bit like in Eclipse where you can set the arguments when you run your program.
For each project you can create a .sublime-project file with a project-specific build system in it:
{
"folders":
[{
"path": "src"
}],
"build_systems":
[{
"name": "Run with args",
"cmd": ["python", "$file", "some", "args"]
}]
}
This way you don't pollute the global Build System menu and won't have to worry about switching build system as you switch projects. This file is also easy to access when you need to change the arguments:
Cmd-Shift-P > Edit Project
InputArgs does precisely what you're looking for. It shows an input dialog every time you run a build (Ctrl+B), and you can supply it with space-separated arguments from within Sublime Text.
I found a simple solution: create a Python file in the same directory:
import os

# run the target script with the desired arguments
os.system("python filename.py some args")