puppet exec vagrant plugin install not working - amazon-web-services

I have successfully installed vagrant-aws on a CentOS VM, and I am trying to 'puppetize' this task. My relevant Puppet code is below:
exec { 'install_aws':
  command => '/usr/bin/vagrant plugin install vagrant-aws',
  #require => [Exec['install_dependent'], Package['vagrant']],
}
When I provision the machine, it reports Exec[install_aws]/returns: executed successfully, but the plugin is not installed, and I have to run the command manually for it to work. I've never seen this behaviour with Puppet before; can someone help?

exec { 'install_aws':
  command => '/usr/bin/sudo /usr/bin/vagrant plugin install vagrant-aws',
  require => [Exec['install_dependent'], Package['vagrant']],
}
Fixed the code above. Good point, needed to run the command as superuser. Seems like a silly mistake, thanks for pointing it out ^^.

Instead of using sudo to run that command (as you pointed out in your answer), I would add the user parameter to the exec and run it as root (or any other user with suitable permissions):
exec { 'install_aws':
  user    => 'root',
  command => '/usr/bin/vagrant plugin install vagrant-aws',
  require => [Exec['install_dependent'], Package['vagrant']],
}
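One thing worth checking here (a hedged aside, not from the answers above): Vagrant installs plugins per user, under that user's ~/.vagrant.d, so the plugin may simply have been installed for a different account than the one you checked with. A quick sketch of how to verify, assuming the exec runs as root:
# list plugins as the same user the exec runs as; Vagrant keeps them under ~/.vagrant.d
sudo -u root /usr/bin/vagrant plugin list
sudo ls /root/.vagrant.d/gems    # assumed default per-user plugin location for root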

Related

Unable to execute a step on a running EMR

I have an EMR 5.28.1 cluster running in AWS, but I forgot to install some Python libraries as part of the bootstrap action. Now that the cluster is running, I was simply attempting to add a step via the EMR console. Here are my settings:
JAR: s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar
Main class: None
Arguments: s3://xxxx/install_python_libraries.sh
Unfortunately, I get the following error.
Cannot run program "s3://xxxxx/install_python_libraries.sh" (in directory "."): error=2, No such file or directory
I am not sure what I am doing wrong. The shell script looks like this.
#!/bin/bash -xe
# Non-standard and non-Amazon Machine Image Python modules:
sudo pip-3.6 install boto3
sudo pip-3.6 install xmltodict
I also tried this by simply using 'command-runner.jar', but I get the same error. Can you please help me figure out the problem so I can do this via the console? I would like to install the libraries on all nodes - master and core.
Thanks
The issue is the xxx.sh file's EOL/carriage-return type.
In other words, if it is Windows ("\r\n") then it will not work and will return the 'file not found' error.
Convert it to Unix type ("\n") using something like Notepad++ and it will run fine.
(In Notepad++: Edit > EOL Conversion > Unix (LF), hit save, and try again.)
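If you'd rather fix the line endings from a shell than in Notepad++, a minimal sketch (reusing the script name and the redacted S3 path from the question):
# strip Windows carriage returns in place, then re-upload the script
sed -i 's/\r$//' install_python_libraries.sh
aws s3 cp install_python_libraries.sh s3://xxxx/install_python_libraries.sh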

Amplify configure

I have installed 'amplify-cli'. When I type 'amplify configure', I get the error message:
'amplify is not recognized as an internal or external command, operable program or batch file'.
Please share your platform. Are you developing on Linux, Windows (PowerShell), or Linux on Windows (WSL/Ubuntu)?
Did you install the CLI globally?
Try this:
npm install -g @aws-amplify/cli
And see if that works. If the global install fails, you can try running this per an Amplify developer:
npm install -g @aws-amplify/cli --unsafe-perm=true
Edit: since you're on Windows, it's possible the CLI wasn't added to your PATH variable. You can fix it as described in this GitHub issue.
To solve this, simply edit a PATH key under system Environment Variables and add a new path pointing to amplify:
C:\Users\{UserName}\AppData\Roaming\npm\amplify.cmd
If you have globally installed @aws-amplify/cli, then you should find two files named amplify and amplify.cmd in the above-mentioned npm directory.
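Before editing PATH, it may help to confirm where npm put the global install in the first place; a small sketch using plain npm commands (these work the same from cmd or PowerShell):
npm config get prefix                  # the directory whose contents need to be on PATH
npm ls -g --depth=0 @aws-amplify/cli   # confirms the CLI really is installed globally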
Under the same circumstances, I ran all the suggested solutions on a Windows 10 machine (64-bit). None of them seemed to do the trick.
I got a more specific error:
..... cannot be loaded because running scripts is disabled on this system ....
+ CategoryInfo : SecurityError: (:) [], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
The issue appears to be due to Windows PowerShell execution policies. Eventually, I managed to fix it by running the following:
C:\Windows\System32>powershell Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
The above solutions didn't work for me; I had to run this instead of 'amplify init':
C:\Users\{UserName}\AppData\Roaming\npm\amplify init
I had the same issue, and my problem was that I was trying to install it using
yarn global add @aws-amplify/cli
Apparently, it doesn't work when it is installed with yarn; it has to be npm. It's funny because there are no errors shown. There might be a fix for it; maybe someone can look into that.
If you are on the Windows platform, avoid using the global (-g) flag in your npm command. Install the Amplify CLI with the npm command below.
npm install @aws-amplify/cli
It worked for me.
Error:
amplify : The term 'amplify' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ amplify init
+ CategoryInfo : ObjectNotFound: (amplify:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
Try this for windows:
Step 1:
npm install -g @aws-amplify/cli --unsafe-perm=true
Step 2:
npm config get prefix
Step 3:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
You must run this in PowerShell, not cmd.
I had the same issue
For Windows, try the command below to install the Amplify CLI:
$ curl -sL https://aws-amplify.github.io/amplify-cli/install-win -o install.cmd && install.cmd
$ amplify configure
For more info on installation, follow the link:
https://docs.amplify.aws/cli/start/install/
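Whichever install route you take, a quick way to confirm the CLI is actually reachable before re-running configure:
amplify --version    # should print the CLI version if the install and PATH are correct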

New ubuntu user gets "mkvirtualenv: command not found" error

I was able to run a test Django application after installing all the required software.
I was able to use mkvirtualenv and create two test apps that worked as required.
I then decided to create another user in Ubuntu. This user does not possess Sudo privileges because I wanted to secure the environment.
With this newly created user I get an error stating "mkvirtualenv: command not found".
My bashrc file has the following contents:
export WORKON_HOME=$HOME/.virtualenvs
export PATH=/usr/local/bin:$PATH
export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python
export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv
source /usr/local/bin/virtualenvwrapper.sh
Running echo $WORKON_HOME results in the following:
/home/ubuntu/.virtualenvs
I'm not particularly sure what I need to do to have the ability to use mkvirtualenv with the new user.
Any help is appreciated.
Thanks in advance!
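For what it's worth, mkvirtualenv is a shell function defined by virtualenvwrapper.sh, so it only exists in shells that have sourced that script. A small diagnostic sketch to run as the new user, assuming the system-wide install path shown in the question:
# confirm the wrapper script is present and readable by this user, then load it by hand
ls -l /usr/local/bin/virtualenvwrapper.sh
source /usr/local/bin/virtualenvwrapper.sh
type mkvirtualenv    # should now report a shell function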

Why don't my custom recipes run on AWS OpsWorks?

I've created a GitHub repo for my simple custom recipe:
my-cookbook/
|- recipes/
|- appsetup.rb
I've added the repo to Custom Chef Recipes as https://github.com/my-github-user/my-github-repo.git
I've added my-cookbook::appsetup to the Setup "cycle".
I know it's executed, because it fails to load if I mess up the syntax.
This is my appsetup.rb:
node[:deploy].each do |app_name, deploy|
  script "install_composer" do
    interpreter "bash"
    user "root"
    cwd "#{deploy[:deploy_to]}/current"
    code "curl -sS https://getcomposer.org/installer | php && php composer.phar install --no-dev"
  end
end
When I log into the instance by SSH with the ubuntu user, composer isn't installed.
I've also tried the following to no avail (a Node.js install):
node[:deploy].each do |app_name, deploy|
  execute "installing node" do
    command "add-apt-repository --yes ppa:chris-lea/node.js && apt-get update && sudo apt-get install python-software-properties python g++ make nodejs"
  end
end
Node doesn't get installed, and there are no errors in the log. The only references to the cookbook in the log just say:
[2014-03-31T13:26:04+00:00] INFO: OpsWorks Custom Run List: ["opsworks_initial_setup", "ssh_host_keys", "ssh_users", "mysql::client", "dependencies", "ebs", "opsworks_ganglia::client", "opsworks_stack_state_sync", "mod_php5_apache2", "my-cookbook::appsetup", "deploy::default", "deploy::php", "test_suite", "opsworks_cleanup"]
...
[2014-03-31T13:26:04+00:00] INFO: New Run List expands to ["opsworks_initial_setup", "ssh_host_keys", "ssh_users", "mysql::client", "dependencies", "ebs", "opsworks_ganglia::client", "opsworks_stack_state_sync", "mod_php5_apache2", "my-cookbook::appsetup", "deploy::default", "deploy::php", "test_suite", "opsworks_cleanup"]
...
[2014-03-31T13:26:05+00:00] DEBUG: Loading Recipe my-cookbook::appsetup via include_recipe
[2014-03-31T13:26:05+00:00] DEBUG: Found recipe appsetup in cookbook my-cookbook
Am I missing some critical step somewhere? The recipe is clearly recognized and loaded, but doesn't seem to be executed.
(The following are fictitious names: my-github-user, my-github-repo, my-cookbook)
I know you've abandoned the cookbook but I'm almost 100% sure it's because you don't have a metadata.rb file in the root of your cookbook.
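The answer above points at a missing metadata.rb; a sketch of a bare-bones one, written here as a shell heredoc and reusing the fictitious cookbook name from the question:
# create a minimal metadata.rb in the root of the cookbook
cat > my-cookbook/metadata.rb <<'EOF'
name    'my-cookbook'
version '0.1.0'
EOF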
Your cookbook name should not contain a dash. I had the same problem; replacing it with '_' solved it for me.
If those commands are failing silently, it could be that your use of && is obscuring a failure.
As for add-apt-repository, that is an interactive command. Try using the "--yes" option to answer yes by default, making it no longer interactive.
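A hedged sketch of the same Node.js install with the steps separated, so a failure in any one of them surfaces instead of being hidden behind the && chain:
# with set -e the script aborts on the first failing step, making the failure visible in the log
set -e
add-apt-repository --yes ppa:chris-lea/node.js
apt-get update
apt-get install -y python-software-properties python g++ make nodejs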
If your command does not execute successfully, you will not find the files in the current directory. Check inside the last release folder to see whether they were put there.
It may be prudent to check that you have the right directory set up by changing the cwd to /tmp.
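A quick sketch of that check on the instance itself; the deploy path is an assumption (OpsWorks typically deploys apps under /srv/www/<app_name>/current):
# see whether composer.phar ended up in the current release or in /tmp
ls -l /srv/www/*/current/composer.phar 2>/dev/null
ls -l /tmp/composer.phar 2>/dev/null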

Starting a Django virtual environment in Grunt

Coming from a JS / Node development background, I like to use Grunt for a lot of my automation. For a recent project I picked up some baby Django, to get a feel for how it operated, but still wanted to integrate Grunt for some of my workflow.
I am currently starting my Django server via Grunt, using the spawn-shell module. This works just fine, but I am also using a virtualenv setup and would like to start that up via Grunt as well.
The command I am using to start the virtual environment is:
source ./venv/bin/activate
Which works just fine from the terminal command line as is. However, executing this command from either grunt shell or grunt exec does nothing. I get no errors from Grunt (it says running, then done without errors), but nothing gets started.
The grunt exec command is as follows:
exec: {
  start: {
    cmd: function() {
      return "source ./venv/bin/activate";
    }
  }
}
And the shell command is:
shell: {
  start: {
    command: 'source ./venv/bin/activate',
    options: {
      stdout: true
    }
  }
}
Any ideas on how to get this working? Or is it not possible, and I should just resort to entering the command manually at start?
Normally, when trying to get other tooling to launch Django while using a virtualenv, the usual approach is to perform both of the following in one command:
activate the virtualenv
run the command you want.
So this ends up being:
. ${VIRTUALENVHOME}/bin/activate && ${PROJECTROOT}/manage.py runserver 0:8000
This is pretty much how it's done with Fabric, Ansible and any other automation tool.
Edit: of course, you'd be supplying the values for the variables VIRTUALENVHOME and PROJECTROOT.
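A small shell illustration of why sourcing the activate script from its own Grunt-spawned shell has no lasting effect, and why combining activation with the server command in a single invocation works (paths taken from the question; the manage.py location is assumed):
# activation only changes the environment of the shell that runs it;
# a child shell's changes disappear when it exits
bash -c "source ./venv/bin/activate"    # no effect on the parent shell or later Grunt tasks
# activate and start the server in the same shell invocation instead
bash -c "source ./venv/bin/activate && python manage.py runserver 0:8000"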