AWS folder location when installed using Anaconda

I installed the AWS CLI using Anaconda on Linux Mint 19.1. I am not sure where to find the .aws folder, because I installed it through Anaconda rather than with pip install awscli --upgrade --user.
Here are the paths I see using find . -iname "aws":
./anaconda3/share/terminfo/a/aws
./anaconda3/pkgs/ncurses-6.1-hfc679d8_2/share/terminfo/a/aws
./anaconda3/pkgs/tensorflow-base-1.12.0-mkl_py36h3c3e929_0/lib/python3.6/site-packages/tensorflow/include/external/aws
./anaconda3/pkgs/tensorflow-base-1.12.0-mkl_py36h3c3e929_0/lib/python3.6/site-packages/tensorflow/include/external/aws/aws-cpp-sdk-s3/include/aws
./anaconda3/pkgs/tensorflow-base-1.12.0-mkl_py36h3c3e929_0/lib/python3.6/site-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws
./anaconda3/pkgs/tensorflow-base-1.12.0-mkl_py36h3c3e929_0/lib/python3.6/site-packages/tensorflow/include/external/aws/aws-cpp-sdk-core/include/aws
./anaconda3/pkgs/awscli-1.16.92-py36_0/bin/aws
./anaconda3/pkgs/tensorflow-base-1.10.0-mkl_py36h3c3e929_0/lib/python3.6/site-packages/tensorflow/include/external/aws
./anaconda3/pkgs/tensorflow-base-1.10.0-mkl_py36h3c3e929_0/lib/python3.6/site-packages/tensorflow/include/external/aws/aws-cpp-sdk-s3/include/aws
./anaconda3/pkgs/tensorflow-base-1.10.0-mkl_py36h3c3e929_0/lib/python3.6/site-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws
./anaconda3/pkgs/tensorflow-base-1.10.0-mkl_py36h3c3e929_0/lib/python3.6/site-packages/tensorflow/include/external/aws/aws-cpp-sdk-core/include/aws
./anaconda3/lib/python3.6/site-packages/tensorflow/include/external/aws
./anaconda3/lib/python3.6/site-packages/tensorflow/include/external/aws/aws-cpp-sdk-s3/include/aws
./anaconda3/lib/python3.6/site-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws
./anaconda3/lib/python3.6/site-packages/tensorflow/include/external/aws/aws-cpp-sdk-core/include/aws
./anaconda3/bin/aws
I checked these two paths:
./anaconda3/pkgs/awscli-1.16.92-py36_0/bin/aws
./anaconda3/bin/aws
using find . -iname "config" | grep aws and find . -iname "credentials" | grep aws, but neither of them contains a config or credentials file.
So where can I find the .aws folder, which apparently was never created? I can confirm that the AWS CLI is installed, since aws --version returns aws-cli/1.16.92 Python/3.6.7 Linux/4.15.0-43-generic botocore/1.12.82

When I was reading about the configuration and credential files here, https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html, I tried to find these files but was unable to.
Following @bwest's suggestion, I followed the steps on the previous page, https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html, ran the command aws configure, and did my find again.
This time I was able to locate the .aws folder and the config and credentials files.
So in short: installing awscli does not automatically create the .aws folder; you have to run aws configure first.
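For reference, a minimal sketch of what that first run looks like (the values below are placeholders, not real keys):
aws configure
# AWS Access Key ID [None]: <your access key>
# AWS Secret Access Key [None]: <your secret key>
# Default region name [None]: us-east-1
# Default output format [None]: json
ls ~/.aws
# config  credentials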

Related

aws cli returns python objects instead of regular output

I just installed the AWS CLI on an Azure VM running Ubuntu, following the official installation guide.
When I run any command from the command line, the result is a Python object rather than text or regular output:
$ aws s3 ls
<botocore.awsrequest.AWSRequest object at 0x7f412f3573a0>
I searched everywhere but I can't find any hint.
I already reinstalled the AWS CLI and also tried using the --output flag, but nothing changes.
Any suggestions?
This took me a while to figure out as well. For some reason it only affected our CI/CD jobs, even though using the exact same container image and environment variables locally worked fine.
Turns out, the issue stems from not providing a region.
You can fix this by specifying the region explicitly in the command:
aws s3 ls --region us-west-2
Or by setting the region through one of the AWS environment variables:
export AWS_REGION="us-west-2"
# or
export AWS_DEFAULT_REGION="us-west-2"
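If you have already run aws configure, you can also persist a default region in ~/.aws/config instead of exporting it each time; a small sketch (the region name is just an example):
aws configure set region us-west-2
# writes "region = us-west-2" under the default profile in ~/.aws/config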
Some related sources that helped me figure this out:
https://github.com/jwalton/gh-ecr-login/issues/3
aws s3 ls gives error botocore.utils.BadIMDSRequestError: <botocore.awsrequest.AWSRequest object at 0x7f3f6cb44d00>
Well, I don't know how I didn't try this before, but installing awscli with apt fixed the issue:
sudo apt-get install awscli

How to copy / transfer files from the Ubuntu terminal into a DigitalOcean Space

Can I scp files to a DigitalOcean Space from the Ubuntu terminal?
I set up a Space in my DigitalOcean account. I can drag and drop files into the Space, but I can't figure out how to copy files to it from the command line.
Do I have to use a Droplet? That would be an additional server, right? I don't want a server, I just want a space to store some files.
There is a tool called s3cmd which is used to upload files and folders to S3 buckets on Amazon. You can use it for the same purpose here, but for DigitalOcean you will need an updated version of s3cmd (version 2 or above).
There is DigitalOcean documentation about setting up s3cmd; you can check it out from here as well.
I got the latest s3cmd Python tarball:
wget https://sourceforge.net/projects/s3tools/files/s3cmd/2.0.1/s3cmd-2.0.1.tar.gz
Extract it:
tar xzf s3cmd-2.0.1.tar.gz
Now install it from the source files using the commands below:
cd s3cmd-2.0.1
sudo python setup.py install
Now you will need to obtain the access token and the secret key from the DigitalOcean API.
You can find the procedure here.
After getting the token and the key, run:
s3cmd --configure
and provide your region, the bucket URL, etc. After the configuration,
simply run
s3cmd ls
to find out the bucket URL.
Then you can upload files by running:
s3cmd put file.txt BUCKETURL
If you want to upload directories:
s3cmd put -r folderName bucketname
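For reference, a minimal sketch of the entries s3cmd --configure needs for a DigitalOcean Space (the nyc3 region and the key values are placeholders; use your own):
# relevant entries in ~/.s3cfg
access_key = YOUR_SPACES_ACCESS_KEY
secret_key = YOUR_SPACES_SECRET_KEY
host_base = nyc3.digitaloceanspaces.com
host_bucket = %(bucket)s.nyc3.digitaloceanspaces.com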

Where can I find the folder which I downloaded from gcloud bucket

Using the gcloud shell, I downloaded my entire bucket, but I couldn't find the downloaded files.
I used the command
gsutil -m cp -R gs://bucket/* .
P.S. Please don't downvote this post; if I asked something incorrectly, let me know in the comments and I will learn how to ask questions correctly and save your time. Thanks.
You used the command gsutil cp, as documented here:
https://cloud.google.com/storage/docs/gsutil/commands/cp
The parameters for this command are:
gsutil cp [OPTION]... src_url dst_url
So you used the -m option to perform a parallel (multi-threaded/multi-processing) copy.
You also added -R to traverse all directories in your bucket recursively.
As the destination URL you entered ".", which specifies the current working directory.
So your files should be in your home directory, or in whatever directory you had switched to with cd before running the command.
The files download to whatever directory you were in when you ran the command. If you never changed directories with cd, that is your home directory; on a Mac, that would be Macintosh HD > Users > YourName.
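If you want an unambiguous location, you can pass an explicit destination directory instead of "."; a small sketch (the directory name is just an example):
mkdir -p ~/bucket-download
gsutil -m cp -R gs://bucket/* ~/bucket-download/
ls ~/bucket-download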

How to install Google or-tools on AWS Lambda?

I've been successfully using Google's or-tools on AWS EC2 instances, but I've recently been looking into including them in AWS Lambda functions and can't get it to run.
Function debug.py
Below is just a basic function importing pywrapcp from ortools, which should succeed if everything is set up correctly.
from ortools.constraint_solver import pywrapcp

def handler(event, context):
    print(pywrapcp)

if __name__ == '__main__':
    handler(None, None)
Failing Module Import
I created a package.sh script that copies all dependencies to the project following Amazon's instructions before creating a zip archive. Running the deployed code results in this:
Unable to import module 'debug': No module named ortools.constraint_solver
Contents of package.sh
#!/bin/bash
DEST_DIR=$(dirname $(realpath -s $0));
echo "Copy all native libraries...";
mkdir -p ./lib && find $directory -type f -name "*.so" | xargs cp -t ./lib;
echo "Create package...";
zip -r dist.zip debug.py lib;
rm -r ./lib;
echo "Add dependencies from $VIRTUAL_ENV to $DEST_DIR/dist.zip";
cd $VIRTUAL_ENV/lib/python2.7/site-packages;
zip -ur $DEST_DIR/dist.zip ./** -x;
When I copy the ortools folder from ortools-4.4.3842-py2.7-linux-x86_64.egg directly into the project root, it finds ortools but then fails to import pywrapcp. This may be related to a failure loading the native libraries, but I'm not sure, since the logs don't show much detail.
Unable to import module 'debug': cannot import name pywrapcp
Any ideas?
Following the discussion on Google or-tools, I put together a packaging script that works around the issues by installing the dependencies in a way that works for AWS Lambda.
The key part is that the contents of the egg packages have to be copied manually into the Lambda project folder and given the correct permissions so that they are accessible at runtime.
#!/bin/sh
easy_install3 py3-ortools
find "/opt/python3/lib/python3.6/site-packages" -path "*.egg/*" -not -name "EGG-INFO" -maxdepth 2 -exec cp -r {} ./dist \;
chmod -R 755 ./dist
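From there, the contents of dist plus the handler are what go into the Lambda deployment zip; roughly (file names follow the example above, adjust to your project):
cd dist && zip -r9 ../lambda.zip . && cd ..
zip -g lambda.zip debug.py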
Instead of creating and configuring an EC2 instance, you can use Docker to create a deployable package locally; see or-tools-lambda for details.
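A rough sketch of that Docker approach, assuming a Python 3.6 Lambda runtime and the lambci/lambda build image (the image tag and the ortools package name are assumptions; adjust for your runtime):
docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.6 \
    pip install ortools -t ./package
# then zip the contents of ./package together with your handler, as above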
Firstly, the underlying AWS Lambda execution environment is Amazon Linux, while or-tools is only tested on the environments below, as per https://github.com/google/or-tools:
Ubuntu 14.04 and 16.04 up (64-bit).
Mac OS X El Capitan with Xcode 7.x (64 bit).
Microsoft Windows with Visual Studio 2013 and 2015 (64-bit)
Test your code by launching an instance with one of the AMIs which AWS Lambda uses, from the list here (http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html).
If it works, use pip to install the dependencies/libraries at the root level of your project directory and then zip it up. Do not copy the libraries into your project directory manually.
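A minimal sketch of that packaging step, run from the project root (the package and zip file names here are just examples):
pip install ortools -t .
zip -r9 function.zip .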

Kubernetes on AWS

When running the following command on kube-master (CoreOS):
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
I get following error:
Can't find aws in PATH, please fix and retry.
I have already set PATH. Can anyone tell me which 'aws' it is searching for? Is it the aws directory in the Kubernetes repo, i.e. kubernetes/cluster/aws?
Follow the AWS CLI installation guide and then ensure your PATH is set correctly.
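A quick way to check is shown below; if you installed the CLI with pip install --user, the binary typically lands in ~/.local/bin, which may not be on your PATH (that path is an assumption about your install method):
which aws || echo "aws not found on PATH"
export PATH="$HOME/.local/bin:$PATH"
aws --version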
Yes, you are right.
If you set "aws" as KUBERNETES_PROVIDER, Kubernetes will use scripts that reside in kubernetes/cluster/aws. If no KUBERNETES_PROVIDER is set, I believe the default it to rely on gcloud CLI tool.
If you are using Ubuntu, run the command below; it will resolve your issue:
apt-get install awscli
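After installing, a quick check that the binary is now found before rerunning the original command:
which aws && aws --version
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash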