GCP deploy instance fails from ansible script - google-cloud-platform

I've been deploying clusters in GCP via Ansible scripts for more than a year now, but all of a sudden one of my scripts keeps giving me this error:
libcloud.common.google.GoogleBaseError: u\"The zone 'projects/[project]/zones/europe-west1-d' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
The obvious reason would be that I don't have enough resources, but not a whole lot has changed and my quotas look fine.
The ansible script itself doesn't ask for a lot.
I'm creating 3 instances of n1-standard-4 with 100GB SSD.
See snippet of script below:
tasks:
- name: create boot disks
  gce_pd:
    disk_type: pd-ssd
    image: "debian-9-stretch-v20171025"
    name: "{{ item.node }}-disk"
    size_gb: 100
    state: present
    zone: "europe-west1-d"
    service_account_email: "{{ service_account_email }}"
    credentials_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
  with_items: "{{nodes}}"
  async: 3600
  poll: 2
- name: create instances
  gce:
    instance_names: "{{item.node}}"
    zone: "europe-west1-d"
    machine_type: "n1-standard-4"
    preemptible: "{{ false if item.num == '0' else true }}"
    disk_auto_delete: true
    disks:
      - name: "{{ item.node }}-disk"
        mode: READ_WRITE
    state: present
    service_account_email: "{{ service_account_email }}"
    service_account_permissions: "compute-rw"
    credentials_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
    tags: "elasticsearch"
  register: gce_raw_results
  with_items: "{{nodes}}"
  async: 3600
  poll: 2
Update 1:
The service account is an editor of the entire project, so a permissions issue seems unlikely.
It started happening on March 24, 2018, and every night since then. So if it were an 'out of stock' issue, that would be quite a coincidence, right?
Besides, I have been running this script all day today and it fails most of the time (see below for success).
I've tested a few times and it seems to have something to do with the 'preemptible' flag on the instance (I start 3 nodes, but at least the first one has to stay up for it to work at all) => preemptible: "{{ false if item.num == '0' else true }}"
If I turn preemptible off (false), then it runs without a hitch.
The 'workaround' seems to be simply not to use preemptible instances, but this worked for a year without failing once. Did something change?
Did GCP's API change? Did Ansible's gce module not implement those changes?
The full error is:
TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [create boot disks] *******************************************************
changed: [localhost] => (item={u'node': u'elasticsearch-link-0', u'ip_field': u'private_ip', u'zone': u'europe-west1-d', u'cluster_name': u'elasticsearch-link', u'num': u'0', u'machine_type': u'n1-standard-4', u'project_id': u'[projectid]'})
changed: [localhost] => (item={u'node': u'elasticsearch-link-1', u'ip_field': u'private_ip', u'zone': u'europe-west1-d', u'cluster_name': u'elasticsearch-link', u'num': u'1', u'machine_type': u'n1-standard-4', u'project_id': u'[projectid]'})
ok: [localhost] => (item={u'node': u'elasticsearch-link-2', u'ip_field': u'private_ip', u'zone': u'europe-west1-d', u'cluster_name': u'elasticsearch-link', u'num': u'2', u'machine_type': u'n1-standard-4', u'project_id': u'[projectid]'})
TASK [create instances] ********************************************************
changed: [localhost] => (item={u'node': u'elasticsearch-link-0', u'ip_field': u'private_ip', u'zone': u'europe-west1-d', u'cluster_name': u'elasticsearch-link', u'num': u'0', u'machine_type': u'n1-standard-4', u'project_id': u'[projectid]'})
changed: [localhost] => (item={u'node': u'elasticsearch-link-1', u'ip_field': u'private_ip', u'zone': u'europe-west1-d', u'cluster_name': u'elasticsearch-link', u'num': u'1', u'machine_type': u'n1-standard-4', u'project_id': u'[projectid]'})
failed: [localhost] (item={u'node': u'elasticsearch-link-2', u'ip_field': u'private_ip', u'zone': u'europe-west1-d', u'cluster_name': u'elasticsearch-link', u'num': u'2', u'machine_type': u'n1-standard-4', u'project_id': u'[projectid]'}) => {"ansible_job_id": "371957735383.2688",
"changed": false, "cmd":
"/tmp/.ansible-airflow/ansible-tmp-1522742180.0-71790706749341/gce.py",
"data": "", "failed": 1, "finished": 1, "item": {"cluster_name":
"elasticsearch-link", "ip_field": "private_ip", "machine_type":
"n1-standard-4", "node": "elasticsearch-link-2", "num": "2",
"project_id": "[projectid]", "zone": "europe-west1-d"}, "msg":
"Traceback (most recent call last):\n File
\"/tmp/.ansible-airflow/ansible-tmp-1522742180.0-71790706749341/async_wrapper.py\",
line 158, in _run_module\n (filtered_outdata, json_warnings) =
_filter_non_json_lines(outdata)\n File \"/tmp/.ansible-airflow/ansible-tmp-1522742180.0-71790706749341/async_wrapper.py\",
line 99, in _filter_non_json_lines\n raise ValueError('No start of
json char found')\nValueError: No start of json char found\n",
"stderr": "Traceback (most recent call last):\n File
\"/tmp/ansible_OnIK1e/ansible_module_gce.py\", line 750, in \n
main()\n File \"/tmp/ansible_OnIK1e/ansible_module_gce.py\", line
712, in main\n module, gce, inames, number)\n File
\"/tmp/ansible_OnIK1e/ansible_module_gce.py\", line 524, in
create_instances\n instance, lc_machine_type, lc_image(),
**gce_args\n File \"/usr/local/lib/python2.7/dist-packages/libcloud/compute/drivers/gce.py\",
line 3874, in create_node\n self.connection.async_request(request,
method='POST', data=node_data)\n File
\"/usr/local/lib/python2.7/dist-packages/libcloud/common/base.py\",
line 784, in async_request\n response = request(**kwargs)\n File
\"/usr/local/lib/python2.7/dist-packages/libcloud/compute/drivers/gce.py\",
line 121, in request\n response = super(GCEConnection,
self).request(*args, **kwargs)\n File
\"/usr/local/lib/python2.7/dist-packages/libcloud/common/google.py\",
line 806, in request\n *args, **kwargs)\n File
\"/usr/local/lib/python2.7/dist-packages/libcloud/common/base.py\",
line 641, in request\n response = responseCls(**kwargs)\n File
\"/usr/local/lib/python2.7/dist-packages/libcloud/common/base.py\",
line 163, in init\n self.object = self.parse_body()\n File
\"/usr/local/lib/python2.7/dist-packages/libcloud/common/google.py\",
line 268, in parse_body\n raise GoogleBaseError(message,
self.status, code)\nlibcloud.common.google.GoogleBaseError: u\"The
zone 'projects/[projectid]/zones/europe-west1-d' does not have enough
resources available to fulfill the request. Try a different zone, or
try again later.\"\n", "stderr_lines": ["Traceback (most recent call
last):", " File \"/tmp/ansible_OnIK1e/ansible_module_gce.py\", line
750, in ", " main()", " File
\"/tmp/ansible_OnIK1e/ansible_module_gce.py\", line 712, in main", "
module, gce, inames, number)", " File
\"/tmp/ansible_OnIK1e/ansible_module_gce.py\", line 524, in
create_instances", " instance, lc_machine_type, lc_image(),
**gce_args", " File \"/usr/local/lib/python2.7/dist-packages/libcloud/compute/drivers/gce.py\",
line 3874, in create_node", "
self.connection.async_request(request, method='POST',
data=node_data)", " File
\"/usr/local/lib/python2.7/dist-packages/libcloud/common/base.py\",
line 784, in async_request", " response = request(**kwargs)", "
File
\"/usr/local/lib/python2.7/dist-packages/libcloud/compute/drivers/gce.py\",
line 121, in request", " response = super(GCEConnection,
self).request(*args, **kwargs)", " File
\"/usr/local/lib/python2.7/dist-packages/libcloud/common/google.py\",
line 806, in request", " *args, **kwargs)", " File
\"/usr/local/lib/python2.7/dist-packages/libcloud/common/base.py\",
line 641, in request", " response = responseCls(**kwargs)", " File
\"/usr/local/lib/python2.7/dist-packages/libcloud/common/base.py\",
line 163, in init", " self.object = self.parse_body()", " File
\"/usr/local/lib/python2.7/dist-packages/libcloud/common/google.py\",
line 268, in parse_body", " raise GoogleBaseError(message,
self.status, code)", "libcloud.common.google.GoogleBaseError: u\"The
zone 'projects/[projectid]/zones/europe-west1-d' does not have enough
resources available to fulfill the request. Try a different zone, or
try again later.\""]}
to retry, use: --limit #/usr/local/airflow/ansible/playbooks/elasticsearch-link-cluster-create.retry

The error message is not pointing to a quota problem, but rather to an issue with the zone's resources; I would advise you to try a different zone.
Quoting from the documentation:
Even if you have a regional quota, it is possible that a resource might not be available in a specific zone. For example, you might have quota in region us-central1 to create VM instances, but might not be able to create VM instances in the zone us-central1-a if the zone is depleted. In such cases, try creating the same resource in another zone, such as us-central1-f.
Therefore, when writing the script you should take this possibility into account, even if it is not very common.
This issue is even more pronounced in the case of preemptible instances, since:
Preemptible instances are finite Compute Engine resources, so they might not always be available. [...] these instances if it requires access to those resources for other tasks. Preemptible instances are excess Compute Engine capacity so their availability varies with usage.
UPDATE
To double-check this, you can keep the preemptible flag but change the zone, to confirm that the script itself works properly and that it is a stockout happening during the evening (and since it works during the day, this should be the case).
If the issue really is availability, you might consider spinning up a preemptible instance and, if it is not available, catching the error and then falling back to either a regular instance or a different zone.
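To make that concrete, here is a minimal sketch of the fallback idea using libcloud directly (the library the gce module calls under the hood). This is only an illustration, not the Ansible module's own logic: it assumes a libcloud version whose GCE driver supports the ex_preemptible argument, and the service account, credentials path, image and zone names are placeholders.
from libcloud.common.google import GoogleBaseError
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

ComputeEngine = get_driver(Provider.GCE)
# Placeholder service account, key file and project.
driver = ComputeEngine('sa@my-project.iam.gserviceaccount.com',
                       '/path/to/credentials.json',
                       project='my-project')

def create_with_fallback(name, zones=('europe-west1-d', 'europe-west1-b')):
    # Try each zone in turn as a preemptible instance; on a
    # resource-availability error, move on to the next zone.
    for zone in zones:
        try:
            return driver.create_node(name,
                                      size='n1-standard-4',
                                      image='debian-9-stretch-v20171025',
                                      location=zone,
                                      ex_preemptible=True)
        except GoogleBaseError as exc:
            if 'does not have enough resources' not in str(exc):
                raise
    # Last resort: a regular (non-preemptible) instance in the preferred zone.
    return driver.create_node(name, size='n1-standard-4',
                              image='debian-9-stretch-v20171025',
                              location=zones[0])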
UPDATE 2
As promised, I created the feature request on your behalf; you can follow the updates on the public tracker.
I advise you to star it in order to receive updates by email:
https://issuetracker.google.com/77734062

Related

Ansible AWS using a role_arn with ansible playbook not giving permissions

I have been stuck on this issue for days, and I can't seem to find anything around the exact same problem I've been having. Currently, I have credentials and config set up like so:
~/.aws/credentials
[default]
aws_access_key_id = ###########
aws_secret_access_key = ######################
[dev]
role_arn=arn:aws:iam::############:role/###AccessRole
source_profile=default
~/.aws/config
[default]
region = us-east-1
output = json
[profile dev]
role_arn = arn:aws:iam::############:role/###AccessRole
source_profile = default
When I run aws cli commands, everything runs fine. If I end up using AWS creds which have admin permissions, it works - but I can't do this in our system.
Currently, the default profile can't access anything on purpose; it assumes the dev role. However, I can't get Ansible to recognise dev. I configured it all, and it works across Terraform, the AWS CLI and Git. Below are my input and the error from ansible-playbook. I have removed certain info/linted the output. As you can see, I'm using ec2.ini and ec2.py.
Has anyone come across this? Is it to do with using role_arn with Ansible? I have tried a plethora of things to get this to work; below is the current state of things.
Thanks in advance!
AWS_PROFILE=dev ansible-playbook -i ./inventory/ec2.py playbook.yml --private-key ###.pem
----
[WARNING]: * Failed to parse {home}/Ansible/Bastion/inventory/ec2.py with script
plugin: Inventory script ({home}/Ansible/Bastion/inventory/ec2.py) had an
execution error: Traceback (most recent call last): File
"{home}/Ansible/Bastion/inventory/ec2.py", line 1712, in <module>
Ec2Inventory() File "{home}Ansible/Bastion/inventory/ec2.py", line 285, in
__init__ self.do_api_calls_update_cache() File
"{home}/Ansible/Bastion/inventory/ec2.py", line 552, in do_api_calls_update_cache
self.get_instances_by_region(region) File
"{home}/Ansible/Bastion/inventory/ec2.py", line 608, in get_instances_by_region
conn = self.connect(region) File "{home}/Ansible/Bastion/inventory/ec2.py", line
570, in connect conn = self.connect_to_aws(ec2, region) File
"{home}/Ansible/Bastion/inventory/ec2.py", line 591, in connect_to_aws
sts_conn = sts.connect_to_region(region, **connect_args) File "{home}.local/lib/python2.7/site-
packages/boto/sts/__init__.py", line 51, in connect_to_region **kw_params) File
"{home}/.local/lib/python2.7/site-packages/boto/regioninfo.py", line 220, in connect return
region.connect(**kw_params) File "{home}/.local/lib/python2.7/site-packages/boto/regioninfo.py",
line 290, in connect return self.connection_cls(region=self, **kw_params) File
"{home}/.local/lib/python2.7/site-packages/boto/sts/connection.py", line 107, in __init__
provider=provider) File "{home}/.local/lib/python2.7/site-packages/boto/connection.py", line
1100, in __init__ provider=provider) File "{home}/.local/lib/python2.7/site-
packages/boto/connection.py", line 555, in __init__ profile_name) File
"{home}/.local/lib/python2.7/site-packages/boto/provider.py", line 201, in __init__
self.get_credentials(access_key, secret_key, security_token, profile_name) File
"{home}/.local/lib/python2.7/site-packages/boto/provider.py", line 297, in get_credentials
profile_name) boto.provider.ProfileNotFoundError: Profile "dev" not found!
[WARNING]: * Failed to parse {home}/Ansible/Bastion/inventory/ec2.py with ini
plugin: {home}/Ansible/Bastion/inventory/ec2.py:3: Error parsing host definition
''''': No closing quotation
[WARNING]: Unable to parse {home}/Ansible/Bastion/inventory/ec2.py as an inventory
source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does
not match 'all'
PLAY [Create kp and access instance] *********************************************************
TASK [Setup variables] *************************************************************************************
ok: [localhost]
TASK [Backup previous key] *************************************************************************
changed: [localhost]
TASK [generate SSH key]
*******************************************************************
changed: [localhost]
TASK [Start and register instance] *****************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Profile given for AWS was not found. Please fix and retry."}
PLAY RECAP *************************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
EDITS:
Name Value Type Location
---- ----- ---- --------
profile dev manual --profile
access_key ****************#### assume-role
secret_key ****************#### assume-role
region <not set> None None
{
"UserId": "<ACCESS_KEY?>:botocore-session-##########",
"Account": "############",
"Arn": "arn:aws:sts::############:assumed-role/###AccessRole/botocore-session-##########"
}
ec2.py is too old: it only uses boto and can't work with roles. It is also deprecated; the correct way now to do AWS dynamic inventory is the aws_ec2 plugin from the amazon.aws collection. It uses boto3, supports roles and is, in the end, more flexible. If needed, there is a compatibility ec2.py config here, but in the long run it is recommended to use the aws_ec2 groups and variables directly.
Check this link on GitHub for the full story.
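As a quick, hedged sanity check of that point: boto3 (which the aws_ec2 plugin uses) can resolve a profile that assumes a role via role_arn/source_profile, which the legacy boto library behind ec2.py cannot. The profile name below is the one from the question; everything else is plain boto3.
import boto3

# Uses the [profile dev] section from ~/.aws/config, which assumes the role.
session = boto3.Session(profile_name='dev')
print(session.client('sts').get_caller_identity()['Arn'])  # prints the assumed-role ARN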

Salt masters behind ELB have flaky connection to minions

I am running the following setup at AWS:
Elastic Loadbalancer in front of two EC2 machines (Amazon Linux) with a docker container that the salt-master runs in
Two EC2 instances with salt-minions installed
The 'master' value in the minion config is set to the dns of the loadbalancer (SaltMaster-env-vpc-test.szfegmankg.us-east-1.elasticbeanstalk.com)
The ELB accepts all traffic from the minions
The Salt-masters accept all traffic from the ELB as well as from the minions
The Salt-masters PKI Folder is shared between the two masters
The Salt-masters have the same private+public keys
The Salt-masters run on 2017.7.1
The Salt-minions run on 2016.11.5 (I tried it with 2017.7.1, but got the same results)
The Salt-minions accept all traffic from the ELB as well as from the masters
The master config looks as follows:
open_mode: True
worker_threads: 20
auto_accept: True
log_level: error
log_level_logfile: debug
extension_modules: srv/salt/ext
rest_cherrypy:
port: 8000
disable_ssl: True
debug: True
external_auth:
pam:
saltdev:
- .*
- '#runner'
# Setting the job_cache to redis.
# The redis config settings are generated at the start of the docker container and
# will be written into /etc/salt/master.d/redis.conf
master_job_cache: redis
cache: redis
pki_dir: /etc/salt/pki/master/efs
The minion config looks as follows:
id: WIN-AB3GO7BJ72I
log_file: C:\salt.log
multiprocessing: False
log_level_logfile: debug
pki_dir: /conf/pki/minion
master: SaltMaster-env-vpc-test.szfegmankg.us-east-1.elasticbeanstalk.com
master_type: str
master_alive_interval: 30
open_mode: True
root_dir: c:\salt
ipc_mode: tcp
recon_default: 1000
recon_max: 199000
recon_randomize: True
In the master log files, I can see on both masters:
2017-09-05 10:06:18,118 [salt.utils.verify][DEBUG ][35] This salt-master instance has accepted 2 minion keys.
A salt-key -L on both masters yields the same result:
Accepted Keys:
WIN-AB3GO7BJ72I
WIN-EDMP9VB716B
Denied Keys:
Unaccepted Keys:
Rejected Keys:
So it looks like all is fine and everything should work. However, a test.ping is extremely flaky. Sometimes it works, but most of the time it doesn't.
Most of the time neither master gets any return from the minion and on the minion side I can see in the log that the minion never receives the message to execute 'test.ping' from the master.
Example 1:
test.ping from Master1:
root@d7383ff8f8bf:/# salt 'WIN-EDMP9VB716B' test.ping
[ERROR ] Exception raised when processing __virtual__ function for salt.loaded.int.cache.consul. Module will not be loaded: 'module' object has no attribute 'Consul'
[ERROR ] An un-handled exception was caught by salt's global exception handler:
KeyError: 'redis.ls'
Traceback (most recent call last):
File "/usr/bin/salt", line 10, in <module>
salt_main()
File "/usr/lib/python2.7/dist-packages/salt/scripts.py", line 476, in salt_main
client.run()
File "/usr/lib/python2.7/dist-packages/salt/cli/salt.py", line 173, in run
for full_ret in cmd_func(**kwargs):
File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 805, in cmd_cli
**kwargs):
File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 1597, in get_cli_event_returns
connected_minions = salt.utils.minions.CkMinions(self.opts).connected_ids()
File "/usr/lib/python2.7/dist-packages/salt/utils/minions.py", line 577, in connected_ids
search = self.cache.ls('minions')
File "/usr/lib/python2.7/dist-packages/salt/cache/__init__.py", line 244, in ls
return self.modules[fun](bank, **self._kwargs)
File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1113, in __getitem__
func = super(LazyLoader, self).__getitem__(item)
File "/usr/lib/python2.7/dist-packages/salt/utils/lazy.py", line 101, in __getitem__
raise KeyError(key)
KeyError: 'redis.ls'
Traceback (most recent call last):
File "/usr/bin/salt", line 10, in <module>
salt_main()
File "/usr/lib/python2.7/dist-packages/salt/scripts.py", line 476, in salt_main
client.run()
File "/usr/lib/python2.7/dist-packages/salt/cli/salt.py", line 173, in run
for full_ret in cmd_func(**kwargs):
File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 805, in cmd_cli
**kwargs):
File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 1597, in get_cli_event_returns
connected_minions = salt.utils.minions.CkMinions(self.opts).connected_ids()
File "/usr/lib/python2.7/dist-packages/salt/utils/minions.py", line 577, in connected_ids
search = self.cache.ls('minions')
File "/usr/lib/python2.7/dist-packages/salt/cache/__init__.py", line 244, in ls
return self.modules[fun](bank, **self._kwargs)
File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1113, in __getitem__
func = super(LazyLoader, self).__getitem__(item)
File "/usr/lib/python2.7/dist-packages/salt/utils/lazy.py", line 101, in __getitem__
raise KeyError(key)
KeyError: 'redis.ls'
I am aware that the redis error will be fixed soon: https://github.com/saltstack/salt/issues/43295
Example 2:
test.ping from Master1, ~ 1 Minute after Example 1:
root@d7383ff8f8bf:/# salt 'WIN-EDMP9VB716B' test.ping
WIN-EDMP9VB716B:
True
Also during my tests, a test.ping from Master2 never succeeded.
I would like to know if there is some flaw in my setup that I am not seeing, or if Salt only works with HAProxy as the load balancer.
Or maybe Salt doesn't work behind an ELB at all?
See https://github.com/saltstack/salt/issues/43368 for more answers.
TL;DR
Because there is no session stickiness for TCP connections, it is currently not possible to work with a salt-master that is behind an ELB if you use the ELB's IP/name as the entry point.

when using boto3 how to create an aws instance with a custom root volume size

When creating an instance in AWS, the root volume size defaults to 8 GB. I am trying to create an instance using boto3 with a different size, for example 300 GB. I am currently trying something like this, without success:
block_device_mappings = []
block_device_mappings.append({
    'DeviceName': '/dev/sda1',
    'Ebs': {
        'VolumeSize': 300,
        'DeleteOnTermination': True,
        'VolumeType': 'gp2'
    }
})
Any idea of how to achieve this?
Most likely what is happening is that you're using an AMI that uses /dev/xvda instead of /dev/sda1 for its root volume.
AMIs these days support one of two types of virtualization, paravirtual (PV) or hardware virtualized (HVM); PV images use /dev/sda1 for the root device name, whereas HVM images can specify either /dev/xvda or /dev/sda1 (more in the AWS documentation).
You can add an image check to determine what the AMI you're using sets its root volume to, and then use that information when you create the instances.
Here's a code snippet that makes a call out to describe_images, retrieves information about its RootDeviceName, and then uses that to configure the block device mapping.
import boto3

if __name__ == '__main__':
    client = boto3.client('ec2')

    # This grabs the Debian Jessie 8.6 image (us-east-1 region)
    image_id = 'ami-49e5cb5e'
    response = client.describe_images(ImageIds=[image_id])
    device_name = response['Images'][0]['RootDeviceName']
    print(device_name)

    block_device_mappings = []
    block_device_mappings.append({
        'DeviceName': device_name,
        'Ebs': {
            'VolumeSize': 300,
            'DeleteOnTermination': True,
            'VolumeType': 'gp2'
        }
    })

    # Whatever you need to create the instances
For reference, the call to describe_images returns a dict that looks like this:
{u'Images': [{u'Architecture': 'x86_64',
u'BlockDeviceMappings': [{u'DeviceName': '/dev/xvda',
u'Ebs': {u'DeleteOnTermination': True,
u'Encrypted': False,
u'SnapshotId': 'snap-0ddda62ff076afbc8',
u'VolumeSize': 8,
u'VolumeType': 'gp2'}}],
u'CreationDate': '2016-11-13T14:03:45.000Z',
u'Description': 'Debian jessie amd64',
u'EnaSupport': True,
u'Hypervisor': 'xen',
u'ImageId': 'ami-49e5cb5e',
u'ImageLocation': '379101102735/debian-jessie-amd64-hvm-2016-11-13-1356-ebs',
u'ImageType': 'machine',
u'Name': 'debian-jessie-amd64-hvm-2016-11-13-1356-ebs',
u'OwnerId': '379101102735',
u'Public': True,
u'RootDeviceName': '/dev/xvda',
u'RootDeviceType': 'ebs',
u'SriovNetSupport': 'simple',
u'State': 'available',
u'VirtualizationType': 'hvm'}],
'ResponseMetadata': {'HTTPHeaders': {'content-type': 'text/xml;charset=UTF-8',
'date': 'Mon, 19 Dec 2016 14:03:36 GMT',
'server': 'AmazonEC2',
'transfer-encoding': 'chunked',
'vary': 'Accept-Encoding'},
'HTTPStatusCode': 200,
'RequestId': '85a22932-7014-4202-92de-4b5ee6b7f73b',
'RetryAttempts': 0}}
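To round out the "whatever you need to create the instances" part, here is a hedged sketch of passing that mapping to run_instances. The image_id and block_device_mappings come from the snippet above; the instance type, key pair name and counts are placeholders to adapt to your account.
# Placeholders: adjust InstanceType, KeyName, networking, etc. for your setup.
response = client.run_instances(
    ImageId=image_id,
    InstanceType='m4.large',
    KeyName='my-key-pair',
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=block_device_mappings,
)
print(response['Instances'][0]['InstanceId'])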

Selenium Grid WebDriver Unable to create new remote session desired capabilities

I am trying out Selenium Grid. My tests are written in Selenium Python.
I have started the Grid hub on my local machine, I have registered the node for IE using a json file on the same machine.
I run a selenium sample test and I get the following error:
Unable to create new remote session desired capabilities
Full error trace:
Traceback (most recent call last):
File "E:\RL Fusion\projects\Selenium Grid\Selenium Grid Sample\Test1 working - try json config file 2\Test1.py", line 21, in setUp
desired_capabilities=desired_cap)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 89, in __init__
self.start_session(desired_capabilities, browser_profile)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 138, in start_session
'desiredCapabilities': desired_capabilities,
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 195, in execute
self.error_handler.check_response(response)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 170, in check_response
raise exception_class(message, screen, stacktrace)
WebDriverException: Message: Unable to create new remote session. desired capabilities = Capabilities [{browserName=internet explorer, javascriptEnabled=true, platform=WINDOWS}], required capabilities = null
Build info: version: '3.0.0-beta3', revision: 'c7b525d', time: '2016-09-01 14:57:03 -0700'
System info: host: 'OptimusPrime-PC', ip: '192.168.0.2', os.name: 'Windows 8.1', os.arch: 'x86', os.version: '6.3', java.version: '1.8.0_31'
Driver info: driver.version: InternetExplorerDriver
Stacktrace:
at org.openqa.selenium.remote.ProtocolHandshake.createSession (ProtocolHandshake.java:80)
at org.openqa.selenium.remote.HttpCommandExecutor.execute (HttpCommandExecutor.java:141)
at org.openqa.selenium.remote.service.DriverCommandExecutor.execute (DriverCommandExecutor.java:82)
at org.openqa.selenium.remote.RemoteWebDriver.execute (RemoteWebDriver.java:597)
at org.openqa.selenium.remote.RemoteWebDriver.startSession (RemoteWebDriver.java:242)
at org.openqa.selenium.remote.RemoteWebDriver.startSession (RemoteWebDriver.java:228)
at org.openqa.selenium.ie.InternetExplorerDriver.run (InternetExplorerDriver.java:180)
at org.openqa.selenium.ie.InternetExplorerDriver.<init> (InternetExplorerDriver.java:172)
at org.openqa.selenium.ie.InternetExplorerDriver.<init> (InternetExplorerDriver.java:148)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0 (None:-2)
at sun.reflect.NativeConstructorAccessorImpl.newInstance (None:-1)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance (None:-1)
at java.lang.reflect.Constructor.newInstance (None:-1)
at org.openqa.selenium.remote.server.DefaultDriverProvider.callConstructor (DefaultDriverProvider.java:103)
at org.openqa.selenium.remote.server.DefaultDriverProvider.newInstance (DefaultDriverProvider.java:97)
at org.openqa.selenium.remote.server.DefaultDriverFactory.newInstance (DefaultDriverFactory.java:60)
at org.openqa.selenium.remote.server.DefaultSession$BrowserCreator.call (DefaultSession.java:222)
at org.openqa.selenium.remote.server.DefaultSession$BrowserCreator.call (DefaultSession.java:209)
at java.util.concurrent.FutureTask.run (None:-1)
at org.openqa.selenium.remote.server.DefaultSession$1.run (DefaultSession.java:176)
at java.util.concurrent.ThreadPoolExecutor.runWorker (None:-1)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (None:-1)
at java.lang.Thread.run (None:-1)
json.cfg.json config implementation:
{
  "class": "org.openqa.grid.common.RegistrationRequest",
  "capabilities": [
    {
      "seleniumProtocol": "WebDriver",
      "browserName": "internet explorer",
      "version": "11",
      "maxInstances": 1,
      "platform": "WIN7"
    }
  ],
  "configuration": {
    "port": 5555,
    "register": true,
    "host": "192.168.0.6",
    "proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
    "maxSession": 2,
    "hubHost": "192.168.0.6",
    "role": "webdriver",
    "registerCycle": 5000,
    "hub": "http://192.168.0.6:4444/grid/register",
    "hubPort": 4444,
    "remoteHost": "http://localhost:4444"
  }
}
The setUp method in my Selenium Python file is:
def setUp(self):
    desired_cap = {'browserName': 'internet explorer',
                   #'platform': 'WIN8_1',
                   'platform': 'WIN7',
                   'javascriptEnabled': True}
    self.driver = webdriver.Remote(
        command_executor='http://localhost:4444/wd/hub',
        desired_capabilities=desired_cap)
What am I doing wrong? Are my desired capabilities not configured properly?
I notice the full trace log says Windows 8.1, yet I specified Win7 for the platform. I do not know why it is trying Win 8.1.
I have now changed desired capabilities to the following:
desired_cap = {'browserName': 'internet explorer',
               'platform': 'windows',
               'javascriptEnabled': True,
               'InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS': True
               }
I now get the error:
WebDriverException: Message: Error forwarding the new session cannot find : Capabilities [{browserName=internet explorer, InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS=true, javascriptEnabled=true, platform=XP}]
I need some help please.
Thanks, Riaz
The Grid uses the below three attributes in its DefaultCapabilitiesMatcher to decide which node a new session request should be routed to:
Platform
BrowserType
Browser version
In your case, based on what you changed, your test is requesting a node that has IE running on Windows, but in your nodeConfig.json you have specified "WIN7".
I don't think specifying "WINDOWS" will work for you. Try changing your desired capabilities to refer to WIN7 instead, and that should work.
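For example, a minimal sketch of capabilities that match the node's registered platform; this simply mirrors the setUp from the question with the platform set back to WIN7, and uses the same hub URL.
from selenium import webdriver

desired_cap = {
    'browserName': 'internet explorer',
    'platform': 'WIN7',
    'javascriptEnabled': True,
}
driver = webdriver.Remote(
    command_executor='http://localhost:4444/wd/hub',
    desired_capabilities=desired_cap,
)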
Also, set the IE Security level to medium (medium to high) and enable Protected Mode for all zones. That resolved the issue for me.

S3 module for downloading files is not working in ansible

This is the Ansible task written for downloading files from the S3 bucket "artefact-test".
- name: Download customization artifacts from S3
  s3:
    bucket: "artefact-test"
    object: "cust/gitbranching.txt"
    dest: "/home/ubuntu/"
    mode: get
    region: "{{ s3_region }}"
    profile: "{{ s3_profile }}"
I have set the boto profile and the AWS profile too. I get different errors which I don't think are valid, like:
failed: [127.0.0.1] => {"failed": true, "parsed": false}
Traceback (most recent call last):
File "/home/dmittal/.ansible/tmp/ansible-tmp-1462436903.77-107775915578620/s3", line 2320, in <module>
main()
File "/home/dmittal/.ansible/tmp/ansible-tmp-1462436903.77-107775915578620/s3", line 304, in main
ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module)
File "/home/dmittal/.ansible/tmp/ansible-tmp-1462436903.77-107775915578620/s3", line 2273, in get_ec2_creds
region, ec2_url, boto_params = get_aws_connection_info(module)
File "/home/dmittal/.ansible/tmp/ansible-tmp-1462436903.77-107775915578620/s3", line 2260, in get_aws_connection_info
if not boto_supports_profile_name():
File "/home/dmittal/.ansible/tmp/ansible-tmp-1462436903.77-107775915578620/s3", line 2191, in boto_supports_profile_name
return hasattr(boto.ec2.EC2Connection, 'profile_name')
AttributeError: 'module' object has no attribute 'ec2'
failed: [127.0.0.1] => {"failed": true}
msg: Target bucket cannot be found
failed: [127.0.0.1] => {"failed": true}
msg: Target key cannot be found
The bucket and key specified both exist on AWS. The same thing works if I use AWS CLI commands to do the same.
There seems to have been a bug related to this. You might want to try adding the following import statement to the file ansible/module_utils/ec2.py:
import boto.ec2
Something like this - https://github.com/ansible/ansible/blob/9cf43ce17f20171b5740a6e0773f666cb47e2d5e/lib/ansible/module_utils/ec2.py#L31
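As a hedged illustration of why that import matters: the helper that raises in the traceback essentially does the check below, and without an explicit import of the boto.ec2 submodule, accessing boto.ec2 on the top-level package fails with "'module' object has no attribute 'ec2'".
# With the submodule imported, the profile_name check the s3 module performs works.
import boto.ec2

print(hasattr(boto.ec2.EC2Connection, 'profile_name'))  # True on boto versions that support profiles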