In my project, to access a testing-environment website I need to send a request header, otherwise I get a 404 error.
I'm running Selenium-wire through Jenkins on a server and running the browser on AWS Device Farm.
The thing is, some sites don't need the header and I can access them normally, and for those the following config works fine:
if browser_name == "chrome":
    options = webdriver.ChromeOptions()
    options.add_argument("--ignore-certificate-errors")
    options.add_argument("--no-sandbox")

    devicefarm_client = boto3.client("devicefarm", region_name="us-west-2")
    testgrid_url_response = devicefarm_client.create_test_grid_url(
        projectArn="arn:aws:devicefarm:us-west-2:111122223333:testgrid-project:123e4567-e89b-12d3-a456-426655440000",  # < example project's Amazon Resource Name (ARN)
        expiresInSeconds=300,
    )

    desired_capabilities = DesiredCapabilities.CHROME
    desired_capabilities["platform"] = "windows"

    driver = webdriver.Remote(
        testgrid_url_response["url"], desired_capabilities, options=options,
        seleniumwire_options={'auto_config': False, 'addr': '127.0.0.1'}
    )
    driver.set_window_size(1920, 1080)
    driver.implicitly_wait(30)
    driver.get("...")
For the site where I need the header, I started by just adding the 'interceptor' function, and then I tried many other things:
elif browser_name == "chrome_1":
    chrome_options = webdriver.ChromeOptions()
    chrome_options.add_argument("--ignore-certificate-errors")
    chrome_options.add_argument("--no-sandbox")
    chrome_options.add_argument('--proxy-server="IP-of-the-machine-running-Jenkins":8087')

    devicefarm_client = boto3.client("devicefarm", region_name="us-west-2")
    testgrid_url_response = devicefarm_client.create_test_grid_url(
        projectArn="arn:aws:devicefarm:us-west-2:111122223333:testgrid-project:123e4567-e89b-12d3-a456-426655440000",  # < example project's Amazon Resource Name (ARN)
        expiresInSeconds=300,
    )

    desired_capabilities = DesiredCapabilities.CHROME
    desired_capabilities["platform"] = "windows"

    driver = webdriver.Remote(
        testgrid_url_response["url"], desired_capabilities, options=chrome_options,
        seleniumwire_options={'auto_config': False, 'addr': '127.0.0.1', 'port': 8087}  # < Here I've tried 0.0.0.0, the IP of the Jenkins machine, etc.
    )
    driver.set_window_size(1920, 1080)
    driver.implicitly_wait(30)

    def interceptor(request):
        request.headers['x-abc-abcdef'] = 'the-header-value'

    driver.request_interceptor = interceptor
    driver.get("...")
Also, locally the 'interceptor' function with the header works just fine and grants me access.
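For reference, a rough sketch of the local setup that does work (the site URL and header value are placeholders):
from selenium.webdriver import ChromeOptions
from seleniumwire import webdriver  # Selenium-wire's drop-in webdriver

def interceptor(request):
    # inject the header the testing environment expects
    request.headers['x-abc-abcdef'] = 'the-header-value'

options = ChromeOptions()
options.add_argument("--ignore-certificate-errors")

# locally, Selenium-wire starts its proxy on 127.0.0.1 and points the local
# Chrome at it automatically, so the interceptor sees every request
driver = webdriver.Chrome(options=options)
driver.request_interceptor = interceptor
driver.get("...")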
If anybody could throw some light here I'd be immensely grateful!
Thanks!
I am getting the following error when trying to use pyVmomi to get a list of VMs from the vCenter Server Appliance.
pyVmomi.VmomiSupport.vim.fault.NoPermission: (vim.fault.NoPermission) {
    dynamicType = <unset>,
    dynamicProperty = (vmodl.DynamicProperty) [],
    msg = 'Permission to perform this operation was denied.',
    faultCause = <unset>,
    faultMessage = (vmodl.LocalizableMessage) [],
    object = 'vim.Folder:group-d1',
    privilegeId = 'System.View',
    missingPrivileges = (vim.fault.NoPermission.EntityPrivileges) [
        (vim.fault.NoPermission.EntityPrivileges) {
            dynamicType = <unset>,
            dynamicProperty = (vmodl.DynamicProperty) [],
            entity = 'vim.Folder:group-d1',
            privilegeIds = (str) [
                'System.View'
            ]
        }
    ]
}
This is my Python code:
import atexit
import ssl
from pyVim import connect
from pyVmomi import vim
import pdb

def vconnect(hostIP, port=None):
    if (True):
        context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        context.check_hostname = False
        context.verify_mode = ssl.CERT_NONE  # disable our certificate checking for lab
    else:
        context = ssl.create_default_context()
        context.options |= ssl.OP_NO_TLSv1_3
        # cipher = 'DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-GCM-SHA256'
        # context.set_ciphers(cipher)
    pdb.set_trace()
    if (port):
        service_instance = connect.SmartConnect(host=str(hostIP),  # build python connection to vSphere
                                                user="root",
                                                pwd="HagsLoff#1324",
                                                port=port,
                                                sslContext=context)
    else:
        service_instance = connect.SmartConnect(host=str(hostIP),  # build python connection to vSphere
                                                user="root",
                                                pwd="HagsLoff#1324",
                                                sslContext=context)

    atexit.register(connect.Disconnect, service_instance)  # build disconnect logic

    content = service_instance.RetrieveContent()
    container = content.rootFolder  # starting point to look into
    viewType = [vim.VirtualMachine]  # object types to look for
    recursive = True  # whether we should look into it recursively
    containerView = content.viewManager.CreateContainerView(container, viewType, recursive)  # create container view
    children = containerView.view
    for child in children:  # for each statement to iterate all names of VMs in the environment
        summary = child.summary
        print(summary.config.name)

# connecting to ESX host
vconnect("192.168.160.160")
# connecting to vcsa VM
vconnect("192.168.160.170", 443)
So I am using a nested ESXi host that runs on my Workstation 16. I have deployed the vCSA on this ESXi host via the Windows CLI installer. Querying the ESXi host works fine, whereas querying the vCenter Server Appliance (vCSA) gives me the above error.
I looked at this discussion, which talks about setting 'global permissions'; however, on my vCenter Server Management VM, my 'Administration' tab does not look anything like this:
What it looks like instead is this:
So apparently I have a 'vCenter Server Management' appliance and not what is referred to as the 'vSphere Client'.
With this context set, I have some questions:
Is the error above due to my trial license?
How is the 'vCenter Server Management (vCSA)' appliance different from the 'vSphere Client'?
Is it possible to change 'global permissions' on the vCSA, or do I need the 'vSphere Client' to do that?
I tried adding the default port (443) as mentioned here, to no avail. Keen to hear from you soon.
I use PythonAnywhere for my IoT Flask server. My MQTT code runs locally (Visual Studio) but fails under PythonAnywhere (the code below is copied from my similar question on the PythonAnywhere forum).
I configured the MQTT credentials, set TLS to false and keepalive to 5 seconds, then instantiate mqtt = Mqtt(app). In the @mqtt.on_connect() handler I print values when rc is zero, but nothing is printed to any log file (even when I used sys.stderr), which means the handler is never entered; therefore publish and on_message() don't work either.
However, if I open Python in the bash console and import the Mqtt instance from the app file, it connects to my broker, and when I call the function that publishes from the console, it publishes the message to my broker.
I also tried mqtt = Mqtt() followed by mqtt.init_app(app) in the main scope, and also tried mqtt.run() in main; neither worked.
from flask import Flask
from flask_mqtt import Mqtt
import sys as syss

app1 = Flask(__name__)  # app creation; not shown in the original snippet

app1.config['MQTT_BROKER_URL'] = 'mybroker'
app1.config['MQTT_BROKER_PORT'] = 1883
app1.config['MQTT_USERNAME'] = ' '
app1.config['MQTT_PASSWORD'] = ' '
app1.config['MQTT_KEEPALIVE'] = 5
app1.config['MQTT_TLS_ENABLED'] = False

mqtt_client = Mqtt(app1)

@mqtt_client.on_connect()
def handle_connect(client, userdata, flags, rc):
    if rc == 0:
        print('Connected successfully', file=syss.stderr)
        mqtt_client.subscribe('esp/copra')
    else:
        print('Bad connection. Code:', rc, file=syss.stderr)

testmqtt = "a"

@mqtt_client.on_message()
def handle_mqtt_message(client, userdata, message):
    payload = message.payload.decode()
    print("payload is " + payload)

@app1.route("/testmqtt")
def testmqtt():
    print("test mqtt here")
    mqtt_client.publish('esp/copra', "pythonanywhere")
    return {"mqtt": "mqtt"}

if __name__ == "__main__":
    # app.config['SESSION_TYPE'] = 'filesystem'
    mqtt_client.init_app(app1)
    app1.run()
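For completeness, the bash-console check that does work looks roughly like this (a sketch; the module name flask_app is an assumption):
# In a PythonAnywhere bash console:
# $ python3
>>> from flask_app import mqtt_client, testmqtt   # module name is an assumption
>>> testmqtt()                                     # publishes 'pythonanywhere' to 'esp/copra'
test mqtt here
{'mqtt': 'mqtt'}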
I have a lot of data that I want to send to AWS Elasticsearch. Looking at the AWS documentation at https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg-upload-data.html, it uses curl -XPUT. However, I want to use Python for this, so I've looked into the boto3 documentation but cannot find a way to upload data.
In https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/es.html I cannot see any method that inserts data.
This seems like a very basic job. Any help?
You can send the data to Elasticsearch using its HTTP interface. Here is the code, sourced from
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-request-signing.html
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth
import boto3

host = ''  # For example, my-test-domain.us-east-1.es.amazonaws.com
region = ''  # e.g. us-west-1
service = 'es'

credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

es = Elasticsearch(
    hosts=[{'host': host, 'port': 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection
)

document = {
    "title": "Moneyball",
    "director": "Bennett Miller",
    "year": "2011"
}

es.index(index="movies", doc_type="_doc", id="5", body=document)
print(es.get(index="movies", doc_type="_doc", id="5"))
EDIT
To confirm whether the data was pushed to Elasticsearch under your index, you can do an HTTP GET, replacing the domain and index name:
search-my-domain.us-west-1.es.amazonaws.com/_search?q=movies
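Equivalently, the same check can be done from Python with the es client created above (a sketch; it assumes the movies index from the example):
# search the index to confirm the document was stored and is searchable
results = es.search(index="movies", body={"query": {"match": {"title": "Moneyball"}}})
print(results["hits"]["hits"])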
Hi, I am new to Python and I am exploring pyVmomi. I want to fetch VM info. For example, I have one data center, i.e. "DataCenter1".
In that data center there are two folders, LinuxServer and WindowsServer, and these folders contain VMs. So I want to fetch each VM name together with its respective folder name:
DataCenter1
|
|----LinuxServer
| |---RHEL-VM
| |---Ubuntu-VM
|
|----WindowsServer
| |---win2k12r2-VM
| |---win2k8r2-VM
My code:
from pyVim.connect import SmartConnect, Disconnect
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
context.verify_mode = ssl.CERT_NONE

connect = SmartConnect(host="172.0.0.0", user="root", pwd="****", port=int("443"), sslContext=context)

datacenter = connect.content.rootFolder.childEntity[0]
print(datacenter)

vms = datacenter.vmFolder.childEntity
for i in vms:
    print(i.name)
    # Here I want to fetch the vm name and its respective folder name

Disconnect(connect)
Here I am able to fetch all VM names, but I also want the folder name of each respective VM.
Is there any method for that?
Can you please guide me?
This will give you the parent name of each VM, i.e. its folder name, if it exists:
from pyVim.connect import SmartConnect, Disconnect
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
context.verify_mode = ssl.CERT_NONE

connect = SmartConnect(host="172.0.0.0", user="root", pwd="****", port=int("443"), sslContext=context)

datacenter = connect.content.rootFolder.childEntity[0]
print(datacenter)

vms = datacenter.vmFolder.childEntity
for vm in vms:
    print(vm.parent.name)

Disconnect(connect)
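If the VMs sit in nested folders, a short recursive walk over the datacenter's vmFolder prints each VM together with the folder that contains it; a sketch under the same connection assumptions as above:
from pyVmomi import vim

def print_vms_with_folder(entity, folder_name):
    # recurse into folders and print each VM with the name of its containing folder
    if isinstance(entity, vim.Folder):
        for child in entity.childEntity:
            print_vms_with_folder(child, entity.name)
    elif isinstance(entity, vim.VirtualMachine):
        print(folder_name, "->", entity.name)

print_vms_with_folder(datacenter.vmFolder, datacenter.name)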
I use Python 3.6; a full example is below. It logs in to vSphere and prints every virtual machine name.
#!/usr/bin/env python3.6
# encoding: utf-8
from pyVim import connect
import ssl

def login():
    ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    ssl_context.verify_mode = ssl.CERT_NONE
    si = connect.SmartConnect(host='192.168.0.1', user='root', pwd='password',
                              sslContext=ssl_context)
    print(si)
    print('\nHello World!\n')
    print('If you got here, you authenticated into vCenter.')
    data_center = si.content.rootFolder.childEntity[0]
    vms = data_center.vmFolder.childEntity
    for vm in vms:
        print(vm.name)

if __name__ == '__main__':
    login()
result:
'vim.ServiceInstance:ServiceInstance'
Hello World!
If you got here, you authenticated into vCenter.
sclautoesxd12v03
sclautoesxd12v04
sclautoesxd12v07
sclautoesxd12v09
sclautoesxd12v11
sclautoesxd12v12
sclautoesxd12v13
sclautoesxd12v16
sclautoesxd12v17
sclautoesxd12v01
sclautoesxd12v02
sclautoesxd12v05
sclautoesxd12v06
sclautoesxd12v08
sclautoesxd12v10
sclautoesxd12v14
sclautoesxd12v15
I have a local development Django setup with Apache. The problem is that on the deployment server there is no proxy, while at my workplace I work behind an HTTP proxy, hence the requests calls fail.
Is there any way of making all calls from the requests library go via a proxy? [I know how to add a proxy to individual calls using the proxies parameter, but is there a global solution?]
I got the same error reported by AmrFouad. In the end, it was fixed by updating wsgi.py as follows:
os.environ['http_proxy'] = "http://proxy.xxx:8080"
os.environ['https_proxy'] = "http://proxy.xxx:8080"
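For context, a sketch of where those lines could sit in a typical Django wsgi.py (the proxy URL and settings module are placeholders):
import os

# route outgoing requests-library calls through the workplace proxy (placeholder URL);
# requests honours the http_proxy / https_proxy environment variables automatically
os.environ['http_proxy'] = "http://proxy.xxx:8080"
os.environ['https_proxy'] = "http://proxy.xxx:8080"

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')  # placeholder module

from django.core.wsgi import get_wsgi_application

application = get_wsgi_application()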
Add the following lines to your wsgi file.
import os
import json

http_proxy = "10.10.1.10:3128"
https_proxy = "10.10.1.11:1080"
ftp_proxy = "10.10.1.10:3128"

proxyDict = {
    "http": http_proxy,
    "https": https_proxy,
    "ftp": ftp_proxy
}

# os.environ values must be strings, so store the dict as JSON
os.environ["PROXIES"] = json.dumps(proxyDict)
Now you can use this environment variable anywhere you want:
r = requests.get(url, headers=headers, proxies=json.loads(os.environ["PROXIES"]))
P.S. - You should have a look at the following links:
Official Python Documentation for Environment Variables
Where and how do I set an environmental variable using mod-wsgi and django?
Python ENVIRONMENT variables
UPDATE 1
You can do something like the following so that the proxy settings are only used on your local machine:
import socket
import json

if socket.gethostname() == "localhost":
    # set the proxy configuration only on the local development server
    os.environ["PROXIES"] = json.dumps(proxyDict)
else:
    # empty proxy configuration on other hosts
    os.environ["PROXIES"] = json.dumps({})