State 'boto_rds.present' is unavailable (EC2 / SaltStack / AWS)

I'm trying to create an RDS instance using boto_rds.present; the state looks like:
rds_instance_abc:
  boto_rds.present:
    - name: learn_rds1
    - allocated_storage: 10
    - storage_type: gp2
    - db_name: abc_testing
    - db_instance_class: db.t2.micro
    - engine: MySQL
    - master_username: root
    - master_user_password: root
    - region: us-east-1
    - keyid: fsdfsdfsdfs
    - key: fsdfsdfsfsdfsdfsfsdfs
After running salt-call state.highstate I get this error:
local:
----------
ID: rds_instance_abc
Function: boto_rds.present
Name: learn_rds1
Result: False
Comment: State 'boto_rds.present' found in SLS u'tester' is unavailable
Started:
Duration:
Changes:
Summary
------------
Succeeded: 0
Failed: 1
I have installed boto on my instance (pip27 install boto).
If I use boto's RDS interface from a shell, the RDS instance is created fine.
Am I missing something in my state?
This is the version report:
Salt: 2014.7.5
Python: 2.6.9 (unknown, Apr 1 2015, 18:16:00)
Jinja2: 2.7.2
M2Crypto: 0.21.1
msgpack-python: 0.4.6
msgpack-pure: Not Installed
pycrypto: 2.6.1
libnacl: Not Installed
PyYAML: 3.10
ioflo: Not Installed
PyZMQ: 14.3.1
RAET: Not Installed
ZMQ: 3.2.5
Mako: Not Installed

The boto_rds Salt module isn't available in the 2014.7 branch. It made it into the 2015.5 branch, which should be released soon.
You could probably take the boto_rds.py execution module and its matching state module, and deploy them from the /srv/salt/_modules and /srv/salt/_states directories on your Salt master.
https://github.com/saltstack/salt/blob/58c7ba2b5fc68f5d1fad6540900560b990bc90f7/salt/states/boto_rds.py#L6
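A minimal sketch of that backport, assuming the matching execution module lives at salt/modules/boto_rds.py in the same commit (the raw URLs below are derived from the link above; verify them before use):

mkdir -p /srv/salt/_modules /srv/salt/_states
curl -o /srv/salt/_modules/boto_rds.py https://raw.githubusercontent.com/saltstack/salt/58c7ba2b5fc68f5d1fad6540900560b990bc90f7/salt/modules/boto_rds.py
curl -o /srv/salt/_states/boto_rds.py https://raw.githubusercontent.com/saltstack/salt/58c7ba2b5fc68f5d1fad6540900560b990bc90f7/salt/states/boto_rds.py
salt '*' saltutil.sync_all    # push the custom modules and states out to the minions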

Related

Google Cloud: ERROR: Reachability Check failed

I have already followed this answer, but it didn't help. I also re-installed the gcloud CLI, and now I am not able to set the CLI up anymore because of the following error.
Here is my output for ./google-cloud-sdk/bin/gcloud init
ERROR: Reachability Check failed.
Cannot reach https://cloudresourcemanager.googleapis.com/v1beta1/projects with httplib2 (SSLCertVerificationError)
Cannot reach https://www.googleapis.com/auth/cloud-platform with httplib2 (SSLCertVerificationError)
Cannot reach https://cloudresourcemanager.googleapis.com/v1beta1/projects with requests (SSLError)
Cannot reach https://www.googleapis.com/auth/cloud-platform with requests (SSLError)
Network connection problems may be due to proxy or firewall settings.
Also, I am not behind any corporate proxy.
It was working perfectly a few days ago, until today. I did not change any settings or install any new services.
Output for ./google-cloud-sdk/bin/gcloud info.
./google-cloud-sdk/bin/gcloud info
Google Cloud SDK [354.0.0]
Python Version: [3.7.9 (v3.7.9:13c94747c7, Aug 15 2020, 01:31:08) [Clang 6.0 (clang-600.0.57)]]
Python Location: [/Users/myname/.config/gcloud/virtenv/bin/python3]
Site Packages: [Enabled]
Installation Root: [/Users/myname/Downloads/google-cloud-sdk]
Installed Components:
gsutil: [4.67]
core: [2021.08.20]
bq: [2.0.71]
System PATH: [/Users/myname/.config/gcloud/virtenv/bin:/Users/myname/Downloads/apache-maven-3.8.4/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin:/usr/local/munki:/usr/local/opt/go/libexec/bin:/Users/myname/go/bin]
Python PATH: [/Users/myname/Downloads/./google-cloud-sdk/lib/third_party:/Users/myname/Downloads/google-cloud-sdk/lib:/Library/Frameworks/Python.framework/Versions/3.7/lib/python37.zip:/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7:/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload:/Users/myname/.config/gcloud/virtenv/lib/python3.7/site-packages]
Cloud SDK on PATH: [False]
Kubectl on PATH: [/usr/local/bin/kubectl]
Installation Properties: [/Users/myname/Downloads/google-cloud-sdk/properties]
User Config Directory: [/Users/myname/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/Users/myname/.config/gcloud/configurations/config_default]
Account: [None]
Project: [None]
Current Properties:
[core]
disable_usage_reporting: [True]
Logs Directory: [/Users/myname/.config/gcloud/logs]
Last Log File: [/Users/myname/.config/gcloud/logs/2022.08.10/15.35.06.807614.log]
git: [git version 2.32.0 (Apple Git-132)]
ssh: [OpenSSH_8.1p1, LibreSSL 2.7.3]
Update on this: as a workaround, disable SSL validation and everything will work:
gcloud config set auth/disable_ssl_validation True
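If you would rather find the root cause before disabling validation, one way to see which certificate is actually being served (a diagnostic sketch using the endpoint from the error above; an intercepting proxy or security tool usually shows up in the issuer field):

curl -v https://cloudresourcemanager.googleapis.com/v1beta1/projects 2>&1 | grep -E 'subject:|issuer:'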

DNS not found from single Docker container running on Beanstalk

I have an AWS Elastic Beanstalk environment and application running the "64bit Amazon Linux 2 v3.0.2 running Docker" solution stack in a standard way. (I followed the AWS documentation.)
I have deployed a Dockerrun.aws.json file to it, and found that it has DNS issues.
To troubleshoot, I SSHed into the EC2 instance where it is running and found that on this instance an nslookup of any of the hostnames in question runs fine.
But running nslookup from within Docker, such as with:
sudo docker run busybox nslookup www.google.com
yields the result:
*** Can't find www.google.com: No answer
Trying the typical solutions, such as passing --dns x.x.x.x or --network=host, does not fix the issue; the container still times out contacting DNS. The exact invocations tried are sketched below.
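For concreteness, these are the variants that were tried (a sketch; x.x.x.x stands in for a resolver address):

sudo docker run --dns x.x.x.x busybox nslookup www.google.com
sudo docker run --network=host busybox nslookup www.google.com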
Any thoughts as to what the issue might be? Here is the docker info output:
$ sudo docker info
Client:
Debug Mode: false
Server:
Containers: 30
Running: 1
Paused: 0
Stopped: 29
Images: 4
Server Version: 19.03.6-ce
Storage Driver: devicemapper
Pool Name: docker-docker--pool
Pool Blocksize: 524.3kB
Base Device Size: 107.4GB
Backing Filesystem: ext4
Udev Sync Supported: true
Data Space Used: 5.403GB
Data Space Total: 12.72GB
Data Space Available: 7.314GB
Metadata Space Used: 3.965MB
Metadata Space Total: 16.78MB
Metadata Space Available: 12.81MB
Thin Pool Minimum Free Space: 1.271GB
Deferred Removal Enabled: true
Deferred Deletion Enabled: true
Deferred Deleted Device Count: 0
Library Version: 1.02.135-RHEL7 (2016-11-16)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: ff48f57fc83a8c44cf4ad5d672424a98ba37ded6
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.181-108.257.amzn1.x86_64
Operating System: Amazon Linux AMI 2018.03
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.79GiB
Name:
ID:
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
Live Restore Enabled: false
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
Thank you

AWS Elasticbeanstalk with Django: Incorrect application version found on all instances

I'm trying to deploy a Django application on Elastic Beanstalk. It had been working fine, then it suddenly stopped, and I cannot figure out why.
When I do eb deploy I get
INFO: Environment update is starting.
INFO: Deploying new version to instance(s).
INFO: New application version was deployed to running EC2 instances.
INFO: Environment update completed successfully.
Alert: An update to the EB CLI is available. Run "pip install --upgrade awsebcli" to get the latest version.
INFO: Attempting to open port 22.
INFO: SSH port 22 open.
INFO: Running ssh -i /home/ubuntu/.ssh/web-cdi_011017.pem ec2-user@54.188.214.227 if ! grep -q 'WSGIApplicationGroup %{GLOBAL}' /etc/httpd/conf.d/wsgi.conf ; then echo -e 'WSGIApplicationGroup %{GLOBAL}' | sudo tee -a /etc/httpd/conf.d/wsgi.conf; fi;
INFO: Attempting to open port 22.
INFO: SSH port 22 open.
INFO: Running ssh -i /home/ubuntu/.ssh/web-cdi_011017.pem ec2-user@54.188.214.227 sudo /etc/init.d/httpd reload
Reloading httpd: [ OK ]
When I then run eb health, I get
Incorrect application version found on all instances. Expected version
"app-c56a-190604_135423" (deployment 300).
If I eb ssh in and look at /opt/python/current, there is nothing there, so nothing is being copied across.
I think something may be wrong with .elasticbeanstalk/config.yml. Somehow the directory was deleted and set up again. This is the config.yml:
branch-defaults:
  master:
    environment: app-prod
  scoring-dev:
    environment: app-dev
environment-defaults:
  app-prod:
    branch: null
    repository: null
global:
  application_name: my-app
  default_ec2_keyname: am-app_011017
  default_platform: arn:aws:elasticbeanstalk:us-west-2::platform/Python 2.7 running on 64bit Amazon Linux/2.3.1
  default_region: us-west-2
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  sc: git
  workspace_type: Application
Please, any ideas about how to troubleshoot this?
I upgraded to the latest AWS stack for Python 2.7 and that sorted it.
I faced the same problem, and the cause was the command timeout.
The default maximum deployment time (the Command timeout) is 600 seconds (10 minutes):
Your Environment → Configuration → Deployment preferences → Command timeout
Increase the command timeout, for example to 1800 seconds, or upgrade the instance type so deployments finish faster.
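If you prefer to keep this setting in version control, the same change can be made with an .ebextensions config file (a sketch; the namespace and option name are from the Elastic Beanstalk general options documentation):

option_settings:
  aws:elasticbeanstalk:command:
    Timeout: 1800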

salt cloud error deploying to AWS

When I try to deploy to Amazon EC2 using Salt Cloud, I'm getting this error:
[root@salt salt]# salt-cloud -p ec2_private_win_r3.xlarge server00009
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': '33b43015-518e-4865-88e7-b6432e61b0db'}
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': '4b88b080-ad32-4388-a133-4322b1c08c04'}
[ERROR ] There was a profile error: 'NoneType' object has no attribute 'copy'
I've verified the AWS keys that I'm using: with the same keys that are in the cloud provider file, I'm able to list and even launch new instances using the aws command line:
## Gov Cloud Non Prod environment
company-govcloud-nonprod-us-east-1:
  # Set up the location of the salt master
  minion:
    master: 10.0.2.15
  # Set up grains information, which will be common for all nodes
  # using this driver
  grains:
    node_type: broker
  # Valid options are:
  #   private_ips - The salt-cloud command is run inside the EC2
  #   public_ips - The salt-cloud command is run outside of EC2
  #
  ssh_interface: private_ips
  # Optionally configure the Windows credential validation number of
  # retries and delay between retries. This defaults to 10 retries
  # with a one second delay between retries
  win_deploy_auth_retries: 10
  win_deploy_auth_retry_delay: 1
  # Set the EC2 access credentials (see below)
  id: 'AKIAIATLQ4FTDDA6BV7A'
  key: 'asdfasdsfadsadasasdafadsadfafasdasda'
  # Make sure this key is owned by root with permissions 0400.
  #
  private_key: /etc/salt/company-timd
  keyname: company-timd
  #securitygroup: core-sg-default
  # Optionally configure default region
  # Use salt-cloud --list-locations <driver> to obtain valid regions
  #
  location: us-east-1
  availability_zone: us-east-1c
  # Configure which user to use to run the deploy script. This setting is
  # dependent upon the AMI that is used to deploy. It is usually safer to
  # configure this individually in a profile, than globally. Typical users
  # are:
  #   Amazon Linux -> ec2-user
  #   RHEL -> ec2-user
  #   CentOS -> ec2-user
  #   Ubuntu -> ubuntu
  #
  ssh_username: root
  # Optionally add an IAM profile
  #iam_profile: 'arn:aws:iam::xxxxxxxxxxxx:role/rl-company-admin'
  driver: ec2
And this is the profile that I’m trying to use:
## Windows Server 2012 Alteryx & Tableau
ec2_private_win_r3.xlarge:
  provider: company-govcloud-nonprod-us-east-1
  image: ami-xxxxxxx
  size: r3.xlarge
  network_interfaces:
    - DeviceIndex: 0
      SubnetId: subnet-xxxxxxx
      SecurityGroupId: sg-xxxxxx
      PrivateIpAddresses:
        - Primary: True
      AssociatePublicIpAddress: False
  block_device_mappings:
    - DeviceName: /dev/sda1
      Ebs.VolumeSize: 120
      Ebs.VolumeType: gp2
    - DeviceName: /dev/sdf
      Ebs.VolumeSize: 250
      Ebs.VolumeType: gp2
  tag: {'Engagement': '999999999999', 'Owner': 'Tim', 'Name': 'non-production', 'Environment': 'COMPANY-Grouper'}
I tried commenting out the IAM profile in the cloud provider definition. I've checked, and the AWS credentials I'm using have administrator access in IAM.
Here's my versions report:
[root@salt ~]# salt-cloud --versions-report
Salt Version:
Salt: 2016.11.5
Dependency Versions:
Apache Libcloud: 0.20.1
cffi: 1.6.0
cherrypy: 3.2.2
dateutil: 2.6.0
docker-py: Not Installed
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
Jinja2: 2.7.2
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.8
mysql-python: Not Installed
pycparser: 2.14
pycrypto: 2.6.1
pycryptodome: 3.4.3
pygit2: Not Installed
Python: 2.7.5 (default, Nov 6 2016, 00:28:07)
python-gnupg: Not Installed
PyYAML: 3.11
PyZMQ: 15.3.0
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4
System Versions:
dist: centos 7.2.1511 Core
machine: x86_64
release: 3.10.0-327.el7.x86_64
system: Linux
version: CentOS Linux 7.2.1511 Core
How can I solve this problem?
Are you trying to launch a Windows EC2 instance with an ssh_username? That may be what is breaking it.
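For Windows deployments, Salt Cloud uses the win_* credential options rather than ssh_username. A hedged sketch of what the profile change might look like (option names per the Salt Cloud Windows docs; 'auto' asks EC2 for the generated administrator password):

ec2_private_win_r3.xlarge:
  provider: company-govcloud-nonprod-us-east-1
  # ... rest of the profile as above ...
  win_username: Administrator
  win_password: auto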

create VPC in salt using boto.vpc

I'm pretty close to having a VPC created, I think, but I'm running into an error when applying the state. It may have to do with an outdated boto module in Python.
This is what I get when I try to apply the state:
[root@salt dlab]# salt '*' state.apply
salt.localdomain:
----------
ID: Ensure VPC exists
Function: boto_vpc.present
Name: myvpc
Result: False
Comment: State 'boto_vpc.present' was not found in SLS 'vpc'
Reason: 'boto_vpc' __virtual__ returned False
Changes:
Summary for salt.localdomain
------------
Succeeded: 0
Failed: 1
------------
Total states run: 1
Total run time: 0.000 ms
ERROR: Minions returned with non-zero exit code
I can see the vpc state assigned with the show_top command:
[root@salt ~]# salt '*' state.show_top
salt.localdomain:
----------
dlab:
- vpc
This is what I have in my top file:
[root@salt ~]# cat /srv/salt/dlab/top.sls
dlab:
  '*':
    - vpc
And this is all I have in my init:
[root@salt ~]# cat /srv/salt/dlab/vpc/init.sls
Ensure VPC exists:
  boto_vpc.present:
    - name: myvpc
    - cidr_block: 10.10.11.0/24
    - dns_hostnames: True
    - region: us-east-1
    - keyid: removed
    - key: removed
Again, the error may be caused by an old boto library. These are the versions I have:
[root@salt ~]# pip list | grep boto
boto (2.42.0)
botocore (1.4.60)
But the code specifies a minimum version:
required_boto_version = '2.8.0'
boto_vpc documentation
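A quick way to check which boto version the Salt master's Python actually imports (a sketch, assuming Salt runs under the system python2.7):

python2.7 -c 'import boto; print(boto.__version__)'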
I tried to upgrade the version of boto that I was using with the following command:
[root@salt ~]# pip install boto --upgrade
Requirement already up-to-date: boto in /usr/lib/python2.7/site-packages
But that's the response I get. Any ideas on how I can get the required version? I'm using CentOS 7.
Make sure you have installed both the boto and boto3 modules.
I had the same error, but once both modules were installed, it was fixed.
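A minimal sketch of that fix, assuming pip installs into the Python that Salt runs under (the sys.state_doc call just confirms the state module now loads):

pip install boto boto3
salt '*' saltutil.refresh_modules          # make the minions re-evaluate __virtual__
salt '*' sys.state_doc boto_vpc.present    # should now print the state's documentation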