salt cloud error deploying to AWS - amazon-web-services

When I try to deploy to Amazon EC2 using salt-cloud, I get this error:
[root@salt salt]# salt-cloud -p ec2_private_win_r3.xlarge server00009
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': '33b43015-518e-4865-88e7-b6432e61b0db'}
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': '4b88b080-ad32-4388-a133-4322b1c08c04'}
[ERROR ] There was a profile error: 'NoneType' object has no attribute 'copy'
I've verified the AWS keys I'm using: with the same keys that are in the cloud provider file, I can list and even launch new instances from the aws command line:
## Gov Cloud Non Prod environment
company-govcloud-nonprod-us-east-1:
  # Set up the location of the salt master
  minion:
    master: 10.0.2.15

  # Set up grains information, which will be common for all nodes
  # using this driver
  grains:
    node_type: broker

  # Valid options are:
  #   private_ips - The salt-cloud command is run inside the EC2
  #   public_ips - The salt-cloud command is run outside of EC2
  #
  ssh_interface: private_ips

  # Optionally configure the Windows credential validation number of
  # retries and delay between retries. This defaults to 10 retries
  # with a one second delay between retries
  win_deploy_auth_retries: 10
  win_deploy_auth_retry_delay: 1

  # Set the EC2 access credentials (see below)
  id: 'AKIAIATLQ4FTDDA6BV7A'
  key: 'asdfasdsfadsadasasdafadsadfafasdasda'

  # Make sure this key is owned by root with permissions 0400.
  #
  private_key: /etc/salt/company-timd
  keyname: company-timd
  #securitygroup: core-sg-default

  # Optionally configure default region
  # Use salt-cloud --list-locations <driver> to obtain valid regions
  #
  location: us-east-1
  availability_zone: us-east-1c

  # Configure which user to use to run the deploy script. This setting is
  # dependent upon the AMI that is used to deploy. It is usually safer to
  # configure this individually in a profile, than globally. Typical users
  # are:
  #   Amazon Linux -> ec2-user
  #   RHEL -> ec2-user
  #   CentOS -> ec2-user
  #   Ubuntu -> ubuntu
  #
  ssh_username: root

  # Optionally add an IAM profile
  #iam_profile: 'arn:aws:iam::xxxxxxxxxxxx:role/rl-company-admin'
  driver: ec2
And this is the profile that I’m trying to use:
## Windows Server 2012 Alteryx & Tableau
ec2_private_win_r3.xlarge:
  provider: company-govcloud-nonprod-us-east-1
  image: ami-xxxxxxx
  size: r3.xlarge
  network_interfaces:
    - DeviceIndex: 0
      SubnetId: subnet-xxxxxxx
      SecurityGroupId: sg-xxxxxx
      PrivateIpAddresses:
        - Primary: True
      AssociatePublicIpAddress: False
  block_device_mappings:
    - DeviceName: /dev/sda1
      Ebs.VolumeSize: 120
      Ebs.VolumeType: gp2
    - DeviceName: /dev/sdf
      Ebs.VolumeSize: 250
      Ebs.VolumeType: gp2
  tag: {'Engagement': '999999999999', 'Owner': 'Tim', 'Name': 'non-production', 'Environment': 'COMPANY-Grouper'}
I tried commenting out the IAM profile in the cloud provider definition. I've checked, and the AWS credentials I'm using have administrator access in IAM.
Here's my version report:
[root@salt ~]# salt-cloud --versions-report
Salt Version:
    Salt: 2016.11.5

Dependency Versions:
    Apache Libcloud: 0.20.1
    cffi: 1.6.0
    cherrypy: 3.2.2
    dateutil: 2.6.0
    docker-py: Not Installed
    gitdb: Not Installed
    gitpython: Not Installed
    ioflo: Not Installed
    Jinja2: 2.7.2
    libgit2: Not Installed
    libnacl: Not Installed
    M2Crypto: Not Installed
    Mako: Not Installed
    msgpack-pure: Not Installed
    msgpack-python: 0.4.8
    mysql-python: Not Installed
    pycparser: 2.14
    pycrypto: 2.6.1
    pycryptodome: 3.4.3
    pygit2: Not Installed
    Python: 2.7.5 (default, Nov 6 2016, 00:28:07)
    python-gnupg: Not Installed
    PyYAML: 3.11
    PyZMQ: 15.3.0
    RAET: Not Installed
    smmap: Not Installed
    timelib: Not Installed
    Tornado: 4.2.1
    ZMQ: 4.1.4

System Versions:
    dist: centos 7.2.1511 Core
    machine: x86_64
    release: 3.10.0-327.el7.x86_64
    system: Linux
    version: CentOS Linux 7.2.1511 Core
How can I solve this problem?

Are you trying to launch a Windows EC2 instance with an ssh_username set? That may be what's breaking it.
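If so, one possible fix is to switch the Windows profile over to salt-cloud's win_* options instead of ssh_username. A minimal sketch, assuming the AMI has a local Administrator account:

```yaml
## Hypothetical Windows variant of the profile above; win_username,
## win_password and use_winrm are salt-cloud's options for Windows
## deploys, used in place of ssh_username
ec2_private_win_r3.xlarge:
  provider: company-govcloud-nonprod-us-east-1
  image: ami-xxxxxxx
  size: r3.xlarge
  win_username: Administrator   # assumption: default local admin on the AMI
  win_password: auto            # let salt-cloud retrieve the generated password
  use_winrm: True               # deploy over WinRM rather than SSH
```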

Related

Google Cloud: ERROR: Reachability Check failed

I followed this answer already, but it didn't help. I also re-installed the gcloud CLI, but now I am not able to install the CLI anymore because of the following error.
Here is my output for ./google-cloud-sdk/bin/gcloud init
ERROR: Reachability Check failed.
Cannot reach https://cloudresourcemanager.googleapis.com/v1beta1/projects with httplib2 (SSLCertVerificationError)
Cannot reach https://www.googleapis.com/auth/cloud-platform with httplib2 (SSLCertVerificationError)
Cannot reach https://cloudresourcemanager.googleapis.com/v1beta1/projects with requests (SSLError)
Cannot reach https://www.googleapis.com/auth/cloud-platform with requests (SSLError)
Network connection problems may be due to proxy or firewall settings.
Also, I am not behind any corporate proxy.
It was working perfectly a few days ago, until today. I did not change any settings or install any new services whatsoever.
Output for ./google-cloud-sdk/bin/gcloud info.
./google-cloud-sdk/bin/gcloud info
Google Cloud SDK [354.0.0]
Python Version: [3.7.9 (v3.7.9:13c94747c7, Aug 15 2020, 01:31:08) [Clang 6.0 (clang-600.0.57)]]
Python Location: [/Users/myname/.config/gcloud/virtenv/bin/python3]
Site Packages: [Enabled]
Installation Root: [/Users/myname/Downloads/google-cloud-sdk]
Installed Components:
gsutil: [4.67]
core: [2021.08.20]
bq: [2.0.71]
System PATH: [/Users/myname/.config/gcloud/virtenv/bin:/Users/myname/Downloads/apache-maven-3.8.4/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin:/usr/local/munki:/usr/local/opt/go/libexec/bin:/Users/myname/go/bin]
Python PATH: [/Users/myname/Downloads/./google-cloud-sdk/lib/third_party:/Users/myname/Downloads/google-cloud-sdk/lib:/Library/Frameworks/Python.framework/Versions/3.7/lib/python37.zip:/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7:/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload:/Users/myname/.config/gcloud/virtenv/lib/python3.7/site-packages]
Cloud SDK on PATH: [False]
Kubectl on PATH: [/usr/local/bin/kubectl]
Installation Properties: [/Users/myname/Downloads/google-cloud-sdk/properties]
User Config Directory: [/Users/myname/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/Users/myname/.config/gcloud/configurations/config_default]
Account: [None]
Project: [None]
Current Properties:
[core]
disable_usage_reporting: [True]
Logs Directory: [/Users/myname/.config/gcloud/logs]
Last Log File: [/Users/myname/.config/gcloud/logs/2022.08.10/15.35.06.807614.log]
git: [git version 2.32.0 (Apple Git-132)]
ssh: [OpenSSH_8.1p1, LibreSSL 2.7.3]
Update on this: disabling SSL validation got everything working again:
gcloud config set auth/disable_ssl_validation True
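Before disabling validation, it may be worth checking whether something is intercepting TLS. A quick sketch using openssl against one of the endpoints from the error above:

```shell
# Print the issuer and subject of the certificate actually served for
# www.googleapis.com; an unexpected issuer usually indicates a
# TLS-intercepting proxy or antivirus rather than a gcloud problem
echo | openssl s_client -connect www.googleapis.com:443 -servername www.googleapis.com 2>/dev/null \
  | openssl x509 -noout -issuer -subject
```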

AWS Elasticbeanstalk with Django: Incorrect application version found on all instances

I'm trying to deploy a Django application on Elastic Beanstalk. It had been working fine, then it suddenly stopped, and I cannot figure out why.
When I do eb deploy I get
INFO: Environment update is starting.
INFO: Deploying new version to instance(s).
INFO: New application version was deployed to running EC2 instances.
INFO: Environment update completed successfully.
Alert: An update to the EB CLI is available. Run "pip install --upgrade awsebcli" to get the latest version.
INFO: Attempting to open port 22.
INFO: SSH port 22 open.
INFO: Running ssh -i /home/ubuntu/.ssh/web-cdi_011017.pem ec2-user@54.188.214.227 if ! grep -q 'WSGIApplicationGroup %{GLOBAL}' /etc/httpd/conf.d/wsgi.conf ; then echo -e 'WSGIApplicationGroup %{GLOBAL}' | sudo tee -a /etc/httpd/conf.d/wsgi.conf; fi;
INFO: Attempting to open port 22.
INFO: SSH port 22 open.
INFO: Running ssh -i /home/ubuntu/.ssh/web-cdi_011017.pem ec2-user@54.188.214.227 sudo /etc/init.d/httpd reload
Reloading httpd: [ OK ]
When I then run eb health, I get
Incorrect application version found on all instances. Expected version
"app-c56a-190604_135423" (deployment 300).
If I eb ssh and look in /opt/python/current, there is nothing there, so nothing is being copied across.
I think something may be wrong with .elasticbeanstalk/config.yml. Somehow the directory was deleted and set up again. This is the config.yml:
branch-defaults:
  master:
    environment: app-prod
  scoring-dev:
    environment: app-dev
environment-defaults:
  app-prod:
    branch: null
    repository: null
global:
  application_name: my-app
  default_ec2_keyname: am-app_011017
  default_platform: arn:aws:elasticbeanstalk:us-west-2::platform/Python 2.7 running on 64bit Amazon Linux/2.3.1
  default_region: us-west-2
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  sc: git
  workspace_type: Application
Please, any ideas about how to troubleshoot this?
I upgraded to the latest AWS stack for python 2.7 and that sorted it
I faced the same problem, and the cause was the command timeout.
The default maximum deployment time (the command timeout) is 600 seconds (10 minutes).
Go to Your Environment → Configuration → Deployment preferences → Command timeout and increase it, for example to 1800, or upgrade the instance type so the deployment runs faster.
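The same setting can also be kept in source control. A sketch of a hypothetical .ebextensions config file (the filename is an example; the option namespace is aws:elasticbeanstalk:command):

```yaml
# .ebextensions/01-timeout.config (example filename)
option_settings:
  aws:elasticbeanstalk:command:
    Timeout: 1800   # seconds; the default is 600
```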

Metabase on Google App Engine

I'm trying to set up Metabase on Google App Engine using Google Cloud SQL (MySQL).
I've got it running using this git repository and this app.yaml:
runtime: custom
env: flex

# Metabase does not support horizontal scaling
# https://github.com/metabase/metabase/issues/2754
# https://cloud.google.com/appengine/docs/flexible/java/configuring-your-app-with-app-yaml
manual_scaling:
  instances: 1

env_variables:
  # MB_JETTY_PORT: 8080
  MB_DB_TYPE: mysql
  MB_DB_DBNAME: [db_name]
  # MB_DB_PORT: 5432
  MB_DB_USER: [db_user]
  MB_DB_PASS: [db_password]
  # MB_DB_HOST: 127.0.0.1
  CLOUD_SQL_INSTANCE: [project-id]:[location]:[instance-id]
I have 2 issues:
Metabase fails to connect to Cloud SQL, even though the Cloud SQL instance is part of the same project and App Engine is authorized.
After I create my admin user in Metabase, I am only able to log in for a few seconds (and only sometimes) before it throws me back to either /setup or /auth/login, saying the password doesn't match (when it does).
I hope someone can help - thank you!
So, we just got metabase running in Google App Engine with a Cloud SQL instance running PostgreSQL and these are the steps we went through.
First, create a Dockerfile:
FROM gcr.io/google-appengine/openjdk:8
EXPOSE 8080
ENV JAVA_OPTS "-XX:+IgnoreUnrecognizedVMOptions -Dfile.encoding=UTF-8 --add-opens=java.base/java.net=ALL-UNNAMED --add-modules=java.xml.bind"
ENV JAVA_TOOL_OPTIONS "-Xmx1g"
ADD https://downloads.metabase.com/enterprise/v1.1.6/metabase.jar $APP_DESTINATION
We tried pushing the memory further down, but 1 GB seemed to be the sweet spot. On to the app.yaml:
runtime: custom
env: flex

manual_scaling:
  instances: 1

resources:
  cpu: 1
  memory_gb: 1
  disk_size_gb: 10

readiness_check:
  path: "/api/health"
  check_interval_sec: 5
  timeout_sec: 5
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 600

beta_settings:
  cloud_sql_instances: <Instance-Connection-Name>=tcp:5432

env_variables:
  MB_DB_DBNAME: 'metabase'
  MB_DB_TYPE: 'postgres'
  MB_DB_HOST: '172.17.0.1'
  MB_DB_PORT: '5432'
  MB_DB_USER: '<username>'
  MB_DB_PASS: '<password>'
  MB_JETTY_PORT: '8080'
Note the beta_settings field at the bottom, which handles what akilesh raj was doing manually. Also, the trailing =tcp:5432 is required, since Metabase does not support Unix sockets yet.
Relevant documentation can be found here.
Although I am not sure of the reason, I think authorizing App Engine's service account is not enough to access Cloud SQL.
In order to authorize your app to access your Cloud SQL instance, you can use either of these two methods:
Within the app.yaml file, configure an environment variable pointing to a service account key file with a correct authorization configuration for Cloud SQL:
env_variables:
  GOOGLE_APPLICATION_CREDENTIALS: [YOURKEYFILE].json
Have your code fetch an authorized service account key from a bucket and load it with the help of the Cloud Storage client library. Seeing that your runtime is custom, the pseudocode to translate into the code you use is the following:
.....
It is better to use the Cloud SQL Proxy to connect to the SQL instances. This way you do not have to authorize the instances in Cloud SQL every time there is a new instance.
More on the Cloud SQL Proxy here
As for setting up Metabase in the Google App Engine, I am including the app.yaml and Dockerfile below.
The app.yaml file,
runtime: custom
env: flex
manual_scaling:
  instances: 1
env_variables:
  MB_DB_TYPE: mysql
  MB_DB_DBNAME: metabase
  MB_DB_PORT: 3306
  MB_DB_USER: root
  MB_DB_PASS: password
  MB_DB_HOST: 127.0.0.1
  METABASE_SQL_INSTANCE: instance_name
The Dockerfile,
FROM gcr.io/google-appengine/openjdk:8
# Set locale to UTF-8
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
# Install CloudProxy
ADD https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 ./cloud_sql_proxy
RUN chmod +x ./cloud_sql_proxy
#Download the latest version of Metabase
ADD http://downloads.metabase.com/v0.21.1/metabase.jar ./metabase.jar
CMD nohup ./cloud_sql_proxy -instances=$METABASE_SQL_INSTANCE=tcp:$MB_DB_PORT & java -jar /startup/metabase.jar

Cannot create image with packer in Google Cloud due to service account error

I have created a service account in Google Cloud and installed the gcloud SDK on my Mac.
Whenever I run my Packer template, it complains about an account that I have no idea where it is coming from.
Packer template:
{
  "builders": [
    {
      "type": "googlecompute",
      "account_file": "/Users/Joe/Downloads/account.json",
      "project_id": "rare-truck-123456",
      "source_image": "centos-7-v20180129",
      "zone": "us-west1-a",
      "ssh_username": "centos"
    }
  ]
}
Error:
➜ packer git:(master) ✗ packer build release_google_image.json
googlecompute output will be in this color.
==> googlecompute: Checking image does not exist...
==> googlecompute: Creating temporary SSH key for instance...
==> googlecompute: Using image: centos-7-v20180129
==> googlecompute: Creating instance...
googlecompute: Loading zone: us-west1-a
googlecompute: Loading machine type: n1-standard-1
googlecompute: Loading network: default
googlecompute: Requesting instance creation...
googlecompute: Waiting for creation operation to complete...
==> googlecompute: Error creating instance: 1 error(s) occurred:
==> googlecompute:
==> googlecompute: * The resource '123412341234-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found.
Build 'googlecompute' errored: Error creating instance: 1 error(s) occurred:
* The resource '123412341234-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found.
==> Some builds didn't complete successfully and had errors:
--> googlecompute: Error creating instance: 1 error(s) occurred:
* The resource '123412341234-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found.
==> Builds finished but no artifacts were created.
Why is it trying to use 123412341234-compute@developer.gserviceaccount.com?
I created a service account with Compute Admin v1 permissions under my project in Google Cloud, downloaded its JSON file, and renamed it to accounts.json. The name of that service account is different (release-builder@rare-truck-123456.iam.gserviceaccount.com), but Packer seems to ignore it and go after some strange account.
Even the CLI command gcloud info reports the right service account:
Google Cloud SDK [188.0.1]
Platform: [Mac OS X, x86_64] ('Darwin', 'Alexs-MacBook-Pro.local', '16.7.0', 'Darwin Kernel Version 16.7.0: Thu Jan 11 22:59:40 PST 2018; root:xnu-3789.73.8~1/RELEASE_X86_64', 'x86_64', 'i386')
Python Version: [2.7.14 (default, Sep 25 2017, 09:53:22) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)]]
Python Location: [/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python]
Site Packages: [Disabled]
Installation Root: [/Users/Joe/Downloads/google-cloud-sdk]
Installed Components:
core: [2018.02.08]
gsutil: [4.28]
bq: [2.0.28]
System PATH: [/Users/Joe/Downloads/google-cloud-sdk/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin]
Python PATH: [/Users/Joe/Downloads/google-cloud-sdk/lib/third_party:/Users/Joe/Downloads/google-cloud-sdk/lib:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python27.zip:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload]
Cloud SDK on PATH: [True]
Kubectl on PATH: [False]
Installation Properties: [/Users/Joe/Downloads/google-cloud-sdk/properties]
User Config Directory: [/Users/Joe/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/Users/Joe/.config/gcloud/configurations/config_default]
Account: [release-builder@rare-truck-123456.iam.gserviceaccount.com]
Project: [rare-truck-123456]
Current Properties:
[core]
project: [rare-truck-123456]
account: [release-builder@rare-truck-123456.iam.gserviceaccount.com]
disable_usage_reporting: [True]
[compute]
region: [us-west1]
zone: [us-west1-a]
Logs Directory: [/Users/Joe/.config/gcloud/logs]
Last Log File: [/Users/Joe/.config/gcloud/logs/2018.02.09/15.51.18.911677.log]
git: [git version 2.14.3 (Apple Git-98)]
ssh: [OpenSSH_7.4p1, LibreSSL 2.5.0]
Google Compute Engine instances use the default service account for better integration with the Google platform.
As you can read in the documentation [1]: "(...) If the user will be managing virtual machine instances that are configured to run as a service account, you must also grant the roles/iam.serviceAccountActor role."
You need to add the "Service Account User" role to your service account in order to be able to create Google Compute Engine instances.
[1] https://cloud.google.com/iam/docs/understanding-roles#compute_name_short_roles
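Granted from the CLI, that would look roughly like this (project and service account taken from the question; double-check the role name against the current IAM docs):

```shell
# Grant the "Service Account User" role on the project to the
# service account that Packer authenticates as
gcloud projects add-iam-policy-binding rare-truck-123456 \
    --member serviceAccount:release-builder@rare-truck-123456.iam.gserviceaccount.com \
    --role roles/iam.serviceAccountUser
```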

State 'boto_rds.present' is unavailable. Ec2/saltstack

I'm trying to create an RDS instance using boto_rds.present; the code looks like:
rds_instance_abc:
  boto_rds.present:
    - name: learn_rds1
    - allocated_storage: 10
    - storage_type: gp2
    - db_name: abc_testing
    - db_instance_class: db.t2.micro
    - engine: MySQL
    - master_username: root
    - master_user_password: root
    - region: us-east-1
    - keyid: fsdfsdfsdfs
    - key: fsdfsdfsfsdfsdfsfsdfs
After running salt-call state.highstate I get this error:
local:
----------
          ID: rds_instance_abc
    Function: boto_rds.present
        Name: learn_rds1
      Result: False
     Comment: State 'boto_rds.present' found in SLS u'tester' is unavailable
     Started:
    Duration:
     Changes:

Summary
------------
Succeeded: 0
Failed:    1
I have installed boto on my instance: pip27 install boto
If I use boto's RDS interface via the shell, the RDS instance is created fine.
Am I missing something in my state?
This is the version report:
Salt: 2014.7.5
Python: 2.6.9 (unknown, Apr 1 2015, 18:16:00)
Jinja2: 2.7.2
M2Crypto: 0.21.1
msgpack-python: 0.4.6
msgpack-pure: Not Installed
pycrypto: 2.6.1
libnacl: Not Installed
PyYAML: 3.10
ioflo: Not Installed
PyZMQ: 14.3.1
RAET: Not Installed
ZMQ: 3.2.5
Mako: Not Installed
The boto_rds Salt module isn't available in the 2014.7 branch. It made it into the 2015.5 branch, which should be released soon.
You could probably take the boto_rds.py module and state and deploy them from your /srv/salt/_modules and /srv/salt/_states directories on your Salt master.
https://github.com/saltstack/salt/blob/58c7ba2b5fc68f5d1fad6540900560b990bc90f7/salt/states/boto_rds.py#L6
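A rough sketch of that backport (the source paths are placeholders for wherever you check out the newer Salt branch; saltutil.sync_all is the standard way to push custom modules and states to minions):

```shell
# On the Salt master: copy the backported module and state into the
# custom-code directories, then sync them out to the minions
mkdir -p /srv/salt/_modules /srv/salt/_states
cp /path/to/salt-2015.5/salt/modules/boto_rds.py /srv/salt/_modules/
cp /path/to/salt-2015.5/salt/states/boto_rds.py /srv/salt/_states/
salt '*' saltutil.sync_all
```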