Unable to locate credentials aws cli - amazon-web-services

I am using AWS CLI version 2 on CentOS with Nginx and PHP 7.1. The following command works fine when I run it directly on the command line:
aws s3 cp files/abc.pdf s3://bucketname/
but when I run the same command from index.php using the following code:
echo exec("aws s3 cp files/abc.pdf s3://bucketname/ 2>&1");
it gives this error:
upload failed: Unable to locate credentials

@Jass Add your credentials in ~/.aws/credentials or ~/.aws/config and make them the [default] profile, or use a named profile in case you have multiple accounts.
Also verify: if you export keys as environment variables, they only apply to that terminal session. So either execute the PHP from the same terminal where you exported the keys, or add them to ~/.aws/credentials.

I tried this and it worked for me, and I believe it should work for you as well. In your PHP code (index.php), try exporting the credential file location like below:
echo exec("export AWS_SHARED_CREDENTIALS_FILE=/<path_to_aws_folder>/.credentials; aws s3 cp files/abc.pdf s3://bucketname/ 2>&1");
When you run from your command line, the AWS CLI picks up the credentials from your home directory, i.e. ~/.aws/credentials (the default). When index.php is executed, it looks for that file in its own home directory, which is apparently not the same as yours, so it cannot find the credentials. With the above change you are explicitly pointing it at your AWS credentials.
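For reference, a minimal sketch of what that credentials file needs to contain, in the INI format the CLI expects. The /var/www/.aws path here is an assumption for illustration, and the key values are AWS's documented example placeholders:

# Hypothetical location for the web user's credentials file (adjust to your setup)
mkdir -p /var/www/.aws
cat > /var/www/.aws/credentials <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF
# Then point the CLI at it explicitly, as in the exec() call above:
export AWS_SHARED_CREDENTIALS_FILE=/var/www/.aws/credentials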

Related

How to use gsutil rsync: log in and download bucket contents to a local directory

I have the following questions.
I got access to a cloud bucket via my email ID. Now I want to download the whole bucket into a local directory on Ubuntu. I installed gsutil from pip.
Is the command correct?
gsutil rsync gs://bucket_name .
The command seems generic; how do I give my Gmail credentials to it? The data is 1 TB in size and I am allowed to download it only once, so I want to get the command right.
The command is correct if you want your current directory to mirror the contents of the bucket (add the -d flag if you also want files not present in the bucket deleted locally). If you merely want to copy, you might want cp -r instead.
Here are the current docs on how to authenticate when running standalone gsutil. It looks like you just need to run gsutil config.
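Putting both answers together, a minimal sketch of the full flow might be (the local destination directory is an assumption):

gsutil config                      # one-time: opens a browser auth flow and writes ~/.boto
mkdir -p ~/bucket_copy
gsutil -m cp -r gs://bucket_name ~/bucket_copy    # plain copy; -m parallelizes large transfers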

passing aws creds to kitchen ec2 command line

I am trying to do Chef cookbook development via a Jenkinsfile pipeline. I have my Jenkins server running as a container (using the jenkinsci/blueocean image). In one of the stages, I am trying to run aws configure and then kitchen test. For some reason, with the code below, I am getting an unauthorized operation error; my AWS creds are not being passed properly to .kitchen.yml (no need to check the IAM creds, because they have admin access).
stage('\u27A1 Verify Kitchen') {
    steps {
        sh '''mkdir -p ~/.aws/
echo 'AWS_ACCESS_KEY_ID=...' >> ~/.aws/credentials
echo 'AWS_SECRET_ACCESS_KEY=...' >> ~/.aws/credentials
cat ~/.aws/credentials
KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen list
KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen test'''
    }
}
Is there any way I can pass AWS creds here? Also, .kitchen.yml no longer supports passing AWS creds inside the file. Is there some way I can pass creds on the command line, i.e. .kitchen.yml access_key=... secret_access_key=... /opt/chefdk/embedded/bin/kitchen test
Really appreciate your help.
You don't need to set KITCHEN_LOCAL_YAML=.kitchen.yml, that's already the primary config file.
You probably want to be using a Jenkins credential file, not hardcoding things into the job. That said, the reason this isn't working is that the AWS credentials file is not a shell script, which is the syntax you're using there; it's an INI-style file, paired with a config file that has a similar structure.
You should probably just use the environment variable support in kitchen-ec2, via the withEnv pipeline helper method or similar mechanisms for integrating with Jenkins-managed credentials.
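For example, a minimal sketch using Jenkins's Credentials Binding plugin. The credential IDs 'aws-access-key' and 'aws-secret-key' are assumptions (use whatever IDs you stored in Jenkins), and recent kitchen-ec2 versions pick up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment via the standard AWS SDK credential chain:

stage('\u27A1 Verify Kitchen') {
    steps {
        // Inject the secrets as environment variables for this block only;
        // 'aws-access-key' / 'aws-secret-key' are hypothetical credential IDs.
        withCredentials([string(credentialsId: 'aws-access-key', variable: 'AWS_ACCESS_KEY_ID'),
                         string(credentialsId: 'aws-secret-key', variable: 'AWS_SECRET_ACCESS_KEY')]) {
            sh '''/opt/chefdk/embedded/bin/kitchen list
/opt/chefdk/embedded/bin/kitchen test'''
        }
    }
}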

How to copy file from bucket GCS to my local machine

I need copy files from Google Cloud Storage to my local machine:
I tried this command in the terminal of a Compute Engine instance:
$sudo gsutil cp -r gs://mirror-bf /var/www/html/mydir
That is my directory on my local machine: /var/www/html/mydir.
I get this error:
CommandException: Destination URL must name a directory, bucket, or bucket
subdirectory for the multiple source form of the cp command.
Where is the mistake?
You must first create the directory /var/www/html/mydir.
Then, you must run the gsutil command on your local machine and not in the Google Cloud Shell. The Cloud Shell runs on a remote machine and can't deal directly with your local directories.
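Concretely, something like this on the local machine (assuming gsutil is installed and authenticated there):

# Create the destination first, then copy; run locally, not in Cloud Shell
sudo mkdir -p /var/www/html/mydir
sudo gsutil cp -r gs://mirror-bf /var/www/html/mydir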
I had a similar problem and went through the painful process of figuring it out, so I thought I would provide my step-by-step solution (under Windows; hopefully it is similar for Unix users) and hope it helps others.
The first thing (as many others have pointed out in various Stack Overflow threads): you have to run a local console (in admin mode) for this to work (i.e. do not use the Cloud Shell terminal).
Here are the steps:
Assuming you already have Python installed on your machine, you will then need to install the gsutil python package using pip from your console:
pip install gsutil
You will then be able to run the gsutil config from that same console:
gsutil config
Running it creates a .boto file, which is needed to make sure you have permission to access your storage.
Also note that you are now given a URL, which is needed in order to get the authorization code (prompted for in the console).
Open a browser and paste this URL in, then:
Log in to your Google account (i.e. the account linked to your Google Cloud).
Google asks you to confirm you want to give access to gsutil. Click Allow.
You will then be given an authorization code, which you can copy and paste into your console:
Finally, you are asked for a project ID:
Get the project ID of interest from your Google Cloud.
In order to find these IDs, click on "My First Project" in the Cloud Console.
Then you will be provided a list of all your projects and their IDs.
Paste that ID into your console, hit Enter, and there you are! You have now created your .boto file. This should be all you need to be able to play with your Cloud Storage.
Console output:
Boto config file "C:\Users\xxxx\.boto" created. If you need to use a proxy to access the Internet please see the instructions in that file.
You will then be able to copy your files and folders from the cloud to your PC using the following gsutil command:
gsutil -m cp -r gs://myCloudFolderOfInterest/ "D:\MyDestinationFolder"
Files from within "myCloudFolderOfInterest" should then get copied to the destination "MyDestinationFolder" (on your local computer).
gsutil -m cp -r gs://bucketname/ "C:\Users\test"
I put a "r" before file path, i.e., r"C:\Users\test" and got the same error. So I removed the "r" and it worked for me.
Try prefixing the destination with '.', i.e. ./var:
$sudo gsutil cp -r gs://mirror-bf ./var/www/html/mydir
Or it may be the problem below:
gsutil cp does not support copying special file types such as sockets, device files, named pipes, or any other non-standard files intended to represent an operating system resource. You should not run gsutil cp with sources that include such files (for example, recursively copying the root directory on Linux that includes /dev ). If you do, gsutil cp may fail or hang.
Source: https://cloud.google.com/storage/docs/gsutil/commands/cp
The syntax that worked for me when downloading to a Mac was:
gsutil cp -r gs://bucketname dir Dropbox/directoryname

AWS deployment failed - jhipster app - EBS

I have tried deploying my JHipster application to AWS Elastic Beanstalk by uploading the WAR directly. When the environment is created, I get this error:
[Instance: i-08f7c9efd8b2c5476] Command failed on instance. Return code: 1
Output: (TRUNCATED).../util/SystemPropertyUtils.class Failed to execute
'/usr/bin/unzip -o -d /var/app/staging
/opt/elasticbeanstalk/deploy/appsource/source_bundle' Failed to execute
'/usr/bin/unzip -o -d /var/app/staging
/opt/elasticbeanstalk/deploy/appsource/source_bundle'. Hook
/opt/elasticbeanstalk/hooks/restartappserver/pre/01_configure_application.sh
failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
Please suggest what I should do next.
I have also tried using the yo jhipster:aws command as per the documentation on the JHipster page.
What I am getting is a "Missing credentials in config" error.
My question is: I have added a credentials.properties file in the given location,
~/.aws/credentials...
meaning .aws/credentials/credentials.properties (a file). Are the file extension and the folder structure right?
The failure occurs at "Create S3 bucket": Error jhipster:aws - Missing credentials in config.
I'm not sure about your first error, as you are setting the environment up manually and we would need more information to reproduce it.
Regarding yo jhipster:aws failing: the AWS credentials file should be located at ~/.aws/credentials, not ~/.aws/credentials/credentials.properties.
Create a credentials file at ~/.aws/credentials on Mac/Linux or C:\Users\USERNAME\.aws\credentials on Windows.
From the docs: https://jhipster.github.io/aws/
For further clarification, use vi and create a file named "credentials" under the "~/.aws" folder.
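As a sketch, the file it expects looks like this; note there is no extension, the file is literally named "credentials" (the key values below are AWS's documented example placeholders, not real credentials):

mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF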

How to uninstall Cloud SDK?

First I installed standalone gsutil on Fedora 25, and it ran nicely for months.
Then I installed the Cloud SDK, and my Google Cloud credentials have been broken ever since.
I don't need Cloud SDK after all. I just want to use gsutil again.
Is there a way to uninstall Cloud SDK and credentials from Linux?
Or maybe uninstall all Google Cloud products and reinstall the stand-alone gsutil?
To explain the likely reason this is happening:
When you install the Cloud SDK, it takes some steps to make sure that when you type gsutil from the shell, it resolves to the Cloud SDK version (depending on the installation method, it might put some executable scripts in /usr/local/bin/, or put /path/to/cloud/sdk/bin at the front of your PATH environment variable). This Cloud SDK wrapper script for gsutil does some extra auth logic, loading an extra .boto file which contains credentials produced by running gcloud auth login. You can see this extra .boto file when running gsutil version -l:
$ gsutil version -l
[...]
using cloud sdk: True
config path(s): /home/USER/.boto, /home/USER/.config/gcloud/legacy_credentials/USER@gmail.com/.boto
[...]
It's likely that the auth credentials in that extra .boto file are overriding the credentials in your $HOME/.boto file.
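If you'd rather just drop the credentials the Cloud SDK added (assuming gcloud is still on your PATH), one option is:

gcloud auth revoke --all    # removes the gcloud-managed account credentials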
How to use standalone gsutil again:
You'll need to ensure that the first gsutil your shell finds is the standalone version. This essentially means that the directory containing the standalone gsutil executable should come before the Cloud SDK directory in your PATH environment variable. You can do this by adding something like the following to the end of your .bashrc file:
if [ -d "/path/to/standalone/gsutil/directory" ]; then
    PATH="/path/to/standalone/gsutil/directory:$PATH"
fi
After doing this, you can run the following command to reload your .bashrc file and check the "using cloud sdk" value in your gsutil version info:
$ source "$HOME/.bashrc"; gsutil version -l
If this still shows that you're using the Cloud SDK version of gsutil, you might have an alias defined for gsutil. You can check for this by running:
$ type gsutil
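If the output names the Cloud SDK (the path below is illustrative; yours may differ), the wrapper is still winning; if it reports an alias instead, you can drop it for the current session:

$ type gsutil
gsutil is /usr/local/google-cloud-sdk/bin/gsutil
$ unalias gsutil    # only needed if 'type' reported an alias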
If you still encounter auth issues when using the standalone version of gsutil, you'll need to generate new credentials:
$ gsutil config