Could not connect to the endpoint URL in Cost Explorer - amazon-web-services

I'm trying to use Cost Explorer to estimate charges, filtered by a tag:
time_period = {'Start': '2017-12-18', 'End': '2017-12-19'}
filters = {
    "And": [{
        "Tags": {
            "Key": "TagName",
            "Values": ["Test1"]
        }
    }]
}
print aws.get_cost_and_usage(TimePeriod=time_period, Granularity='DAILY', Metrics=['BlendedCost'], Filter=filters)
When I request the cost of any of my machines (Ireland), I get an error saying it cannot connect to ce.eu-west-1.amazonaws.com:
Traceback (most recent call last):
File "test.py", line 22, in <module>
print aws.service.cloudwatch.client.get_cost_and_usage(TimePeriod=time_period, Granularity='DAILY', Metrics=['BlendedCost'], Filter=filters)
File "/usr/local/lib/python2.7/dist-packages/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://ce.eu-west-1.amazonaws.com/"
Maybe this service is not available in Ireland yet?
I cannot find "Cost explorer" / "Billing" / "Cost management" here:
http://docs.aws.amazon.com/general/latest/gr/rande.html#awssupport_region
I'm using:
boto3==1.5.2
botocore==1.8.16

The Cost Explorer service is deployed only in us-east-1.
All of your queries must be directed to that region, e.g.:
client = boto3.client('ce', region_name='us-east-1')
client.get_cost_and_usage(....)
The response will include data from all of your regions.
Notice that the AWS console also labels Billing as 'Global' when you navigate to it.
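For completeness, a minimal sketch of the original query pointed at the right region (client name, dates, and tag values taken from the question; note that a single tag condition can be expressed directly, without the "And" wrapper):

import boto3

# Cost Explorer answers only in us-east-1, regardless of where the
# billed resources actually run.
client = boto3.client('ce', region_name='us-east-1')

response = client.get_cost_and_usage(
    TimePeriod={'Start': '2017-12-18', 'End': '2017-12-19'},
    Granularity='DAILY',
    Metrics=['BlendedCost'],
    Filter={
        "Tags": {
            "Key": "TagName",
            "Values": ["Test1"]
        }
    },
)
print(response)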

Related

Creating Connection for RedshiftDataOperator

I went to the Airflow documentation for AWS Redshift. There are two operators that can execute a SQL query: RedshiftSQLOperator and RedshiftDataOperator. I have already implemented my job using RedshiftSQLOperator, but I want to do it with RedshiftDataOperator instead, because I don't want to use a Postgres connection in RedshiftSQLOperator, but the AWS API.
RedshiftDataOperator Documentation
I read this documentation; there is an aws_conn_id parameter. But when I try to use the same connection ID, I get an error.
[2023-01-11, 04:55:56 UTC] {base.py:68} INFO - Using connection ID 'redshift_default' for task execution.
[2023-01-11, 04:55:56 UTC] {base_aws.py:206} INFO - Credentials retrieved from login
[2023-01-11, 04:55:56 UTC] {taskinstance.py:1889} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/redshift_data.py", line 146, in execute
self.statement_id = self.execute_query()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/redshift_data.py", line 124, in execute_query
resp = self.hook.conn.execute_statement(**filter_values)
File "/home/airflow/.local/lib/python3.7/site-packages/botocore/client.py", line 415, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/botocore/client.py", line 745, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (UnrecognizedClientException) when calling the ExecuteStatement operation: The security token included in the request is invalid.
The task definition:
redshift_data_task = RedshiftDataOperator(
    task_id='redshift_data_task',
    database='rds',
    region='ap-southeast-1',
    aws_conn_id='redshift_default',
    sql="""
        call some_procedure();
    """
)
What should I fill in for the Airflow connection? The documentation has no example of the values I should use. Thanks
Airflow RedshiftDataOperator Connection Required Value
Have you tried using the Amazon Redshift connection? There is both an option for authenticating using your Redshift credentials:
Connection ID: redshift_default
Connection Type: Amazon Redshift
Host: <your-redshift-endpoint> (for example, redshift-cluster-1.123456789.us-west-1.redshift.amazonaws.com)
Schema: <your-redshift-database> (for example, dev, test, prod, etc.)
Login: <your-redshift-username> (for example, awsuser)
Password: <your-redshift-password>
Port: <your-redshift-port> (for example, 5439)
(source)
and an option for using an IAM role (there is an example in the first link).
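For illustration, a sketch of creating the credentials-based connection above programmatically rather than in the UI (the field values are placeholders; the conn_type string "redshift" is my assumption of what the Amazon provider registers for this connection type):

from airflow import settings
from airflow.models import Connection

# Sketch: the same Amazon Redshift connection as in the field list above,
# created via the Airflow ORM. All values are placeholders.
conn = Connection(
    conn_id="redshift_default",
    conn_type="redshift",
    host="<your-redshift-endpoint>",
    schema="<your-redshift-database>",
    login="<your-redshift-username>",
    password="<your-redshift-password>",
    port=5439,
)
session = settings.Session()
session.add(conn)
session.commit()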
Disclaimer: I work at Astronomer :)
EDIT: Tested the following with Airflow 2.5.0 and Amazon provider 6.2.0:
Added the IP of my Airflow instance to the VPC security group with "All traffic" access.
An Airflow connection with the connection ID aws_default, connection type "Amazon Web Services", and the extra field set to { "aws_access_key_id": "<your-access-key-id>", "aws_secret_access_key": "<your-secret-access-key>", "region_name": "<your-region-name>" }. All other fields were left blank. I used a root key for my toy AWS account; if you use other credentials, you need to make sure the IAM role has access and the right permissions on the Redshift cluster (there is a list in the link above).
Operator code:
red = RedshiftDataOperator(
    task_id="red",
    database="dev",
    sql="SELECT * FROM dev.public.users LIMIT 5;",
    cluster_identifier="redshift-cluster-1",
    db_user="awsuser",
    aws_conn_id="aws_default"
)
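If you prefer to keep the credentials out of the UI, the same connection can also be supplied through an environment variable; a sketch, assuming the JSON connection format (supported since Airflow 2.3) and the same placeholder values as above:

import os

# Sketch: in practice you would export AIRFLOW_CONN_AWS_DEFAULT in the
# scheduler/worker environment rather than set it in Python.
os.environ["AIRFLOW_CONN_AWS_DEFAULT"] = (
    '{"conn_type": "aws",'
    ' "login": "<your-access-key-id>",'
    ' "password": "<your-secret-access-key>",'
    ' "extra": {"region_name": "<your-region-name>"}}'
)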

EndpointConnectionError: Could not connect to the endpoint URL: "http://169.254.169.254/....."

I am trying to create an AWS RDS instance and deploy a Lambda function using a Python script. However, I am getting the error below; it looks like the script is unable to reach AWS to create the RDS instance.
DEBUG: Caught retryable HTTP exception while making metadata service request to http://169.254.169.254/latest/meta-data/iam/security-credentials/: Could not connect to the endpoint URL: "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/botocore/utils.py", line 303, in _get_request
response = self._session.send(request.prepare())
File "/usr/lib/python2.7/site-packages/botocore/httpsession.py", line 282, in send raise EndpointConnectionError(endpoint_url=request.url, error=e)
EndpointConnectionError: Could not connect to the endpoint URL: "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
I get the AWS credentials through Okta SSO. In the ~/.aws directory, below are the contents of the 'credentials' and 'config' files respectively:
[default]
aws_access_key_id = <Key Id>
aws_secret_access_key = <Secret Key>
aws_session_token = <Token>
[default]
region = us-west-2
for az in availability_zones:
    if aurora.get_db_instance(db_instance_identifier + "-" + az)[0] != 0:
        aurora.create_db_instance(db_cluster_identifier, db_instance_identifier + "-" + az, az, subnet_group_identifier, db_instance_type)
    else:
        aurora.modify_db_instance(db_cluster_identifier, db_instance_identifier + "-" + az, az, db_instance_type)

# Wait for DB to become available for connection
iter_max = 15
iteration = 0
for az in availability_zones:
    while aurora.get_db_instance(db_instance_identifier + "-" + az)[1]["DBInstances"][0]["DBInstanceStatus"] != "available":
        iteration += 1
        if iteration < iter_max:
            logging.info("Waiting for DB instances to become available - iteration " + str(iteration) + " of " + str(iter_max))
            time.sleep(10 * iteration)
        else:
            raise Exception("Waiting for DB Instance to become available timed out!")

cluster_endpoint = aurora.get_db_cluster(db_cluster_identifier)[1]["DBClusters"][0]["Endpoint"]
The actual error is below, coming from the while loop. DEBUG shows 'Unable to locate credentials', but the credentials are there; I can deploy an Elastic Beanstalk environment from the CLI using the same AWS credentials, but not this. It looks like the aurora.create_db_instance call above failed.
DEBUG: Unable to locate credentials
Traceback (most recent call last):
File "./deploy_api.py", line 753, in <module> sync_rds()
File "./deploy_api.py", line 57, in sync_rds
while aurora.get_db_instance(db_instance_identifier + "-" + az)[1]["DBInstances"][0]["DBInstanceStatus"] != "available":
TypeError: 'NoneType' object has no attribute '__getitem__'
I had this error because an ECS task didn't have permissions to write to DynamoDB. The code causing the problem was:
from boto3 import resource
dynamodb_resource = resource("dynamodb")
The problem was resolved when I filled in the region_name, aws_access_key_id and aws_secret_access_key parameters for the resource() function call.
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html#boto3.session.Session.resource
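A minimal sketch of that explicit-credentials form (the placeholder values are mine, not from the question):

from boto3 import resource

# Sketch of the fix described above: pass the region and credentials
# explicitly instead of relying on the credential chain falling through
# to the instance metadata service.
dynamodb_resource = resource(
    "dynamodb",
    region_name="<your-region>",
    aws_access_key_id="<your-access-key-id>",
    aws_secret_access_key="<your-secret-access-key>",
)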
If this doesn't solve your problem then check your code that connects to AWS services and make sure that you are filling in all of the proper function parameters.

Salt masters behind ELB have flaky connection to minions

I am running the following setup at AWS:
An Elastic Load Balancer in front of two EC2 machines (Amazon Linux), each running the salt-master in a Docker container
Two EC2 instances with salt-minions installed
The 'master' value in the minion config is set to the dns of the loadbalancer (SaltMaster-env-vpc-test.szfegmankg.us-east-1.elasticbeanstalk.com)
The ELB accepts all traffic from the minions
The Salt-masters accept all traffic from the ELB as well as from the minions
The Salt-masters PKI Folder is shared between the two masters
The Salt-masters have the same private+public keys
The Salt-masters run on 2017.7.1
The Salt-minions run on 2016.11.5 (I tried it with 2017.7.1, but got the same results)
The Salt-minions accept all traffic from the ELB as well as from the masters
The master config looks as follows:
open_mode: True
worker_threads: 20
auto_accept: True
log_level: error
log_level_logfile: debug
extension_modules: srv/salt/ext
rest_cherrypy:
  port: 8000
  disable_ssl: True
  debug: True
external_auth:
  pam:
    saltdev:
      - .*
      - '@runner'
# Setting the job_cache to redis.
# The redis config settings are generated at the start of the docker container and
# will be written into /etc/salt/master.d/redis.conf
master_job_cache: redis
cache: redis
pki_dir: /etc/salt/pki/master/efs
The minion config looks as follows:
id: WIN-AB3GO7BJ72I
log_file: C:\salt.log
multiprocessing: False
log_level_logfile: debug
pki_dir: /conf/pki/minion
master: SaltMaster-env-vpc-test.szfegmankg.us-east-1.elasticbeanstalk.com
master_type: str
master_alive_interval: 30
open_mode: True
root_dir: c:\salt
ipc_mode: tcp
recon_default: 1000
recon_max: 199000
recon_randomize: True
In the master log files, I can see on both masters:
2017-09-05 10:06:18,118 [salt.utils.verify][DEBUG ][35] This salt-master instance has accepted 2 minion keys.
A salt-key -L on both masters yields the same result:
Accepted Keys:
WIN-AB3GO7BJ72I
WIN-EDMP9VB716B
Denied Keys:
Unaccepted Keys:
Rejected Keys:
So it looks like all is fine and everything should work. However, a test.ping is extremely flaky. Sometimes it works, but most of the time it doesn't.
Most of the time neither master gets any return from the minion and on the minion side I can see in the log that the minion never receives the message to execute 'test.ping' from the master.
Example 1:
test.ping from Master1:
root@d7383ff8f8bf:/# salt 'WIN-EDMP9VB716B' test.ping
[ERROR ] Exception raised when processing __virtual__ function for salt.loaded.int.cache.consul. Module will not be loaded: 'module' object has no attribute 'Consul'
[ERROR ] An un-handled exception was caught by salt's global exception handler:
KeyError: 'redis.ls'
Traceback (most recent call last):
File "/usr/bin/salt", line 10, in <module>
salt_main()
File "/usr/lib/python2.7/dist-packages/salt/scripts.py", line 476, in salt_main
client.run()
File "/usr/lib/python2.7/dist-packages/salt/cli/salt.py", line 173, in run
for full_ret in cmd_func(**kwargs):
File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 805, in cmd_cli
**kwargs):
File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 1597, in get_cli_event_returns
connected_minions = salt.utils.minions.CkMinions(self.opts).connected_ids()
File "/usr/lib/python2.7/dist-packages/salt/utils/minions.py", line 577, in connected_ids
search = self.cache.ls('minions')
File "/usr/lib/python2.7/dist-packages/salt/cache/__init__.py", line 244, in ls
return self.modules[fun](bank, **self._kwargs)
File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1113, in __getitem__
func = super(LazyLoader, self).__getitem__(item)
File "/usr/lib/python2.7/dist-packages/salt/utils/lazy.py", line 101, in __getitem__
raise KeyError(key)
KeyError: 'redis.ls'
I am aware that the redis error will be fixed soon https://github.com/saltstack/salt/issues/43295
Example 2:
test.ping from Master1, ~ 1 Minute after Example 1:
root@d7383ff8f8bf:/# salt 'WIN-EDMP9VB716B' test.ping
WIN-EDMP9VB716B:
True
Also during my tests, a test.ping from Master2 never succeeded.
I would like to know if there is some flaw in my setup that I am not seeing, or if Salt only works behind HAProxy rather than an ELB?
Or maybe Salt doesn't work behind an ELB at all?
See https://github.com/saltstack/salt/issues/43368 for more answers.
TL;DR
Because there is no session stickiness for TCP connections, it is currently not possible to run a salt-master behind an ELB if you use the ELB's IP/name as the entry point.

Hyperledger chaincode does not get current user metadata

I'm currently working with Hyperledger chaincode and trying to get at least some information about the current user who invokes/queries the chaincode. For some reason the chaincode example asset_management.go fails with the error "ERRO 031 Got error: Invalid admin certificate. Empty." I have security.enabled and security.privacy set to true, Membership services running, and I've enrolled "admin".
Here are the lines in the code where it happens:
// Set the admin
// The metadata will contain the certificate of the administrator
adminCert, err := stub.GetCallerMetadata()
if err != nil {
    myLogger.Debug("Failed getting metadata")
    return nil, errors.New("Failed getting metadata.")
}
if len(adminCert) == 0 {
    myLogger.Debug("Invalid admin certificate. Empty.")
    return nil, errors.New("Invalid admin certificate. Empty.")
}
Do you have any ideas how to make the chaincode return data for stub.GetCallerMetadata()?
"Metadata" should be provided in your deploy command, an example of "deploy" for asset_management_with_roles:
curl -XPOST -d '{"jsonrpc": "2.0", "method": "deploy", "params": {"type": 1, "chaincodeID": {"path": "github.com/hyperledger/fabric/examples/chaincode/go/asset_management_with_roles", "language": "GOLANG"}, "ctorMsg": { "args": ["init"] }, "metadata": [97, 115, 115, 105, 103, 110, 101, 114], "secureContext": "assigner"}, "id": 0}' http://localhost:7050/chaincode
In this command, "metadata" contains the UTF-8 encoded string "assigner". This string is saved in the ledger, and only a user with that role will be able to execute the "assign" function in the smart contract.
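As a quick sanity check, that byte array is just the UTF-8 encoding of the role name, which you can reproduce in Python:

# The metadata array from the deploy command above, derived from the string.
print(list("assigner".encode("utf-8")))  # [97, 115, 115, 105, 103, 110, 101, 114]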
"asset_management" example expects that you will provide certificate in metadata field. In order to obtain certificate you can use step 9 described in related question: How is running the asset_management.go different from running a simple chaincode like chaincode_example02.go

Selenium Grid WebDriver Unable to create new remote session desired capabilities

I am trying out Selenium Grid. My tests are written in Selenium Python.
I have started the Grid hub on my local machine and registered the IE node using a JSON file on the same machine.
I run a selenium sample test and I get the following error:
Unable to create new remote session desired capabilities
Full error trace:
Traceback (most recent call last):
File "E:\RL Fusion\projects\Selenium Grid\Selenium Grid Sample\Test1 working - try json config file 2\Test1.py", line 21, in setUp
desired_capabilities=desired_cap)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 89, in __init__
self.start_session(desired_capabilities, browser_profile)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 138, in start_session
'desiredCapabilities': desired_capabilities,
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 195, in execute
self.error_handler.check_response(response)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 170, in check_response
raise exception_class(message, screen, stacktrace)
WebDriverException: Message: Unable to create new remote session. desired capabilities = Capabilities [{browserName=internet explorer, javascriptEnabled=true, platform=WINDOWS}], required capabilities = null
Build info: version: '3.0.0-beta3', revision: 'c7b525d', time: '2016-09-01 14:57:03 -0700'
System info: host: 'OptimusPrime-PC', ip: '192.168.0.2', os.name: 'Windows 8.1', os.arch: 'x86', os.version: '6.3', java.version: '1.8.0_31'
Driver info: driver.version: InternetExplorerDriver
Stacktrace:
at org.openqa.selenium.remote.ProtocolHandshake.createSession (ProtocolHandshake.java:80)
at org.openqa.selenium.remote.HttpCommandExecutor.execute (HttpCommandExecutor.java:141)
at org.openqa.selenium.remote.service.DriverCommandExecutor.execute (DriverCommandExecutor.java:82)
at org.openqa.selenium.remote.RemoteWebDriver.execute (RemoteWebDriver.java:597)
at org.openqa.selenium.remote.RemoteWebDriver.startSession (RemoteWebDriver.java:242)
at org.openqa.selenium.remote.RemoteWebDriver.startSession (RemoteWebDriver.java:228)
at org.openqa.selenium.ie.InternetExplorerDriver.run (InternetExplorerDriver.java:180)
at org.openqa.selenium.ie.InternetExplorerDriver.<init> (InternetExplorerDriver.java:172)
at org.openqa.selenium.ie.InternetExplorerDriver.<init> (InternetExplorerDriver.java:148)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0 (None:-2)
at sun.reflect.NativeConstructorAccessorImpl.newInstance (None:-1)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance (None:-1)
at java.lang.reflect.Constructor.newInstance (None:-1)
at org.openqa.selenium.remote.server.DefaultDriverProvider.callConstructor (DefaultDriverProvider.java:103)
at org.openqa.selenium.remote.server.DefaultDriverProvider.newInstance (DefaultDriverProvider.java:97)
at org.openqa.selenium.remote.server.DefaultDriverFactory.newInstance (DefaultDriverFactory.java:60)
at org.openqa.selenium.remote.server.DefaultSession$BrowserCreator.call (DefaultSession.java:222)
at org.openqa.selenium.remote.server.DefaultSession$BrowserCreator.call (DefaultSession.java:209)
at java.util.concurrent.FutureTask.run (None:-1)
at org.openqa.selenium.remote.server.DefaultSession$1.run (DefaultSession.java:176)
at java.util.concurrent.ThreadPoolExecutor.runWorker (None:-1)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (None:-1)
at java.lang.Thread.run (None:-1)
The json.cfg.json node config:
{
  "class": "org.openqa.grid.common.RegistrationRequest",
  "capabilities": [
    {
      "seleniumProtocol": "WebDriver",
      "browserName": "internet explorer",
      "version": "11",
      "maxInstances": 1,
      "platform": "WIN7"
    }
  ],
  "configuration": {
    "port": 5555,
    "register": true,
    "host": "192.168.0.6",
    "proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
    "maxSession": 2,
    "hubHost": "192.168.0.6",
    "role": "webdriver",
    "registerCycle": 5000,
    "hub": "http://192.168.0.6:4444/grid/register",
    "hubPort": 4444,
    "remoteHost": "http://localhost:4444"
  }
}
The setUp method in my Selenium Python file is:
def setUp(self):
    desired_cap = {'browserName': 'internet explorer',
                   # 'platform': 'WIN8_1',
                   'platform': 'WIN7',
                   'javascriptEnabled': True}
    self.driver = webdriver.Remote(
        command_executor='http://localhost:4444/wd/hub',
        desired_capabilities=desired_cap)
What am I doing wrong? Are my desired capabilities not configured properly?
I notice the full trace log says Windows 8.1, although I specified WIN7 for the platform. I do not know why it is trying for Win 8.1.
I have now changed desired capabilities to the following:
desired_cap = {'browserName': 'internet explorer',
               'platform': 'windows',
               'javascriptEnabled': True,
               'InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS': True
               }
I now get the error:
WebDriverException: Message: Error forwarding the new session cannot find : Capabilities [{browserName=internet explorer, InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS=true, javascriptEnabled=true, platform=XP}]
I need some help please.
Thanks, Riaz
The Grid uses the three attributes below in its DefaultCapabilitiesMatcher to decide which node a new session request should be routed to:
Platform
BrowserType
Browser version
In your case, based on what you changed, your test is requesting a node that has IE running on Windows, but in your nodeConfig.json you have specified "WIN7".
I don't think specifying "WINDOWS" will work. You can try changing your desired capabilities to refer to WIN7, and that should work, as in the sketch below.
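A sketch of the desired capabilities aligned with the node config above (only the platform value changes from the question's last attempt):

# Sketch: request the exact platform the node registered with ("WIN7"),
# so the DefaultCapabilitiesMatcher can route the session to that node.
desired_cap = {'browserName': 'internet explorer',
               'platform': 'WIN7',
               'javascriptEnabled': True}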
Separately: keep the IE security level at medium (medium to high) and enable Protected Mode for all zones. That resolved the issue for me.