I'm trying to retrieve the status of my CloudFront distributions through boto3, but the status returned by get_distribution or list_distributions only gives the deployment state (Deployed or InProgress) instead.
{
   "ResponseMetadata": {
      "RequestId": "bacc7917-90b4-4f91-8915-5dc7201b179a",
      "HTTPStatusCode": 200,
      "HTTPHeaders": {
         "x-amzn-requestid": "bacc7917-90b4-4f91-8915-5dc7201b179a",
         "etag": "ECIVXNE16EKWC",
         "content-type": "text/xml",
         "content-length": "3102",
         "date": "Thu 02 Feb 2023 21:33:47 GMT"
      },
      "RetryAttempts": 0
   },
   "ETag": "ECIVXNE16EKWC",
   "Distribution": {
      "Id": "E2H2PR2OHJ17TC",
      "ARN": "arn:aws:cloudfront::556730911179:distribution/E2H2PR2OHJ17TC",
      "Status": "Deployed",
      "LastModifiedTime": "datetime.datetime(2023, 2, 2, 21, 29, 35, 959000, tzinfo=tzutc())",
      "InProgressInvalidationBatches": 0,
      "DomainName": "dx4o38vn878h1.cloudfront.net",
      "ActiveTrustedSigners": {
         "Enabled": false,
         "Quantity": 0
Does anyone know a way to return the status (enabled/disabled) of a CloudFront distribution through boto3?
I tried inspecting the output of both get_distribution and list_distributions.
The documentation and API reference for the distribution config mention an 'Enabled' field, which is what enables/disables the distribution. It might be worth inspecting the output of get_distribution_config to see whether Enabled comes back in the results.
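A minimal sketch of that check, reusing the distribution Id from the output above (it assumes the Enabled flag sits at the top level of DistributionConfig, which is what the API reference documents):

import boto3

cloudfront = boto3.client('cloudfront')

# 'E2H2PR2OHJ17TC' is the distribution Id from the question's output.
config = cloudfront.get_distribution_config(Id='E2H2PR2OHJ17TC')
enabled = config['DistributionConfig']['Enabled']
print('Distribution is', 'enabled' if enabled else 'disabled')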
I followed the installation guide for DevStack (https://docs.openstack.org/devstack/latest/) and then followed https://docs.openstack.org/swift/latest/overview_auth.html#keystone-auth to configure the keystoneauth middleware.
But when I tried to list buckets using boto3 with the credentials I generated from openstack ec2 credential create, I got the error "The AWS Access Key Id you provided does not exist in our records".
I would appreciate any help.
My boto3 code is:
import boto3

s3 = boto3.resource('s3',
                    aws_access_key_id='5d14869948294bb48f9bfe684b8892ca',
                    aws_secret_access_key='ffcbcec69fb54622a0185a5848d7d0d2')

for bucket in s3.buckets.all():
    print(bucket)
Here the two keys come from the output of openstack ec2 credential create:
| access     | 5d14869948294bb48f9bfe684b8892ca               |
| links      | {'self': '10.180.205.202/identity/v3/users/…'} |
| project_id | c128ad4f9a154a04832e41a43756f47d               |
| secret     | ffcbcec69fb54622a0185a5848d7d0d2               |
| trust_id   | None                                           |
| user_id    | 2abd57c56867482ca6cae5a9a2afda29               |
After running the commands #larsks provided, I got public: http://10.180.205.202:8080/v1/AUTH_ed6bbefe5ab44f32b4891fc5e3e55f1f as my Swift endpoint. And just to be sure: my EC2 credential is under the user admin and also the project admin.
When I followed the boto3 code and removed everything starting from v1 in my endpoint, I got the error botocore.exceptions.ClientError: An error occurred () when calling the ListBuckets operation.
When I kept the AUTH part, I got botocore.exceptions.ClientError: An error occurred (412) when calling the ListBuckets operation: Precondition Failed.
The previous problem was resolved by adding enable_service s3api to local.conf and stacking again. This is likely because OpenStack needs to know it should expose the S3 API; the documentation says Swift "will be configured to act as a S3 endpoint for Keystone so effectively replacing the nova-objectstore."
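For reference, the change is just one line in DevStack's local.conf (a minimal sketch; [[local|localrc]] is DevStack's standard local.conf section and the rest of the file is unchanged):

[[local|localrc]]
# Expose the S3 API on top of Swift so S3 clients such as boto3 can talk to it.
enable_service s3api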
Your problem is probably that nowhere are you telling boto3 how to connect to your OpenStack environment, so by default it is trying to connect to Amazon's S3 service (in your example you're also not passing in your access key and secret key, but I'm assuming this was just a typo when creating your example).
If you want to connect to the OpenStack object storage service, you'll need to first get the endpoint for that service from the catalog. You can get this from the command line by running openstack catalog list; you can also retrieve it programmatically if you make use of the openstack Python module (a Python sketch of that follows the shell example below).
You can just inspect the output of openstack catalog list and look for the swift service, or you can parse it out using e.g. jq:
$ openstack catalog list -f json |
jq -r '.[]|select(.Name == "swift")|.Endpoints[]|select(.interface == "public")|.url'
https://someurl.example.com/swift/v1
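If you would rather stay in Python, here is a minimal sketch using the openstacksdk module (the cloud name 'devstack' is assumed to match an entry in your clouds.yaml; Swift is registered in the catalog as service type object-store):

import openstack

# Connect using a cloud entry from clouds.yaml; 'devstack' is a placeholder name.
conn = openstack.connect(cloud='devstack')

# Ask the Keystone catalog for the public endpoint of the object-store (Swift) service.
swift_endpoint = conn.session.get_endpoint(
    service_type='object-store',
    interface='public',
)
print(swift_endpoint)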
In any case, you need to pass the endpoint to boto3:
>>> import boto3
>>> session = boto3.session.Session()
>>> s3 = session.client(service_name='s3',
... aws_access_key_id='access_key_id_goes_here',
... aws_secret_access_key='secret_key_goes_here',
... endpoint_url='endpoint_url_goes_here')
>>> s3.list_buckets()
{'ResponseMetadata': {'RequestId': 'tx0000000000000000d6a8c-0060de01e2-cff1383c-default', 'HostId': '', 'HTTPStatusCode': 200, 'HTTPHeaders': {'transfer-encoding': 'chunked', 'x-amz-request-id': 'tx0000000000000000d6a8c-0060de01e2-cff1383c-default', 'content-type': 'application/xml', 'date': 'Thu, 01 Jul 2021 17:56:51 GMT', 'connection': 'close', 'strict-transport-security': 'max-age=16000000; includeSubDomains; preload;'}, 'RetryAttempts': 0}, 'Buckets': [{'Name': 'larstest', 'CreationDate': datetime.datetime(2018, 12, 5, 0, 20, 19, 4000, tzinfo=tzutc())}, {'Name': 'larstest2', 'CreationDate': datetime.datetime(2019, 3, 7, 21, 4, 12, 628000, tzinfo=tzutc())}, {'Name': 'larstest4', 'CreationDate': datetime.datetime(2021, 5, 12, 18, 47, 54, 510000, tzinfo=tzutc())}], 'Owner': {'DisplayName': 'lars', 'ID': '4bb09e3a56cd451b9d260ad6c111fd96'}}
>>>
Note that if the endpoint url from openstack catalog list includes a version (e.g., .../v1), you will probably want to drop that.
I'm trying to execute an AWS Step Function from API Gateway, and it's working as expected.
Whenever I pass the input and stateMachineArn (the step function to execute), it triggers the step function.
But it still returns status code 200 when it can't find the step function; I want to return status code 404 when API Gateway does not find that step function.
Could you please help me with that?
Response:
Status: 200 OK
Expected:
Status: 404
Thanks,
Harika.
As per the documentation, the StartExecution API call returns 400 Bad Request for a non-existent state machine, which is correct by RESTful API standards:
StateMachineDoesNotExist
The specified state machine does not exist.
HTTP Status Code: 400
From the RESTful API point of view, the endpoint /execution/ (which I created in API Gateway for the integration setup) is a resource, regardless of whether it accepts GET, POST, or something else. 404 is only appropriate when the resource /execution/ itself does not exist. If the /execution/ endpoint exists but its invocation failed (for whatever reason), the response status code must be something other than 404.
So the 200 response for a POST call with a non-existent state machine is, strictly speaking, correct. When API Gateway made the call to the non-existent state machine, it got a 400 from the StartExecution API call, which it wrapped into a proper error message in the response body instead of returning a 4xx HTTP response:
curl -s -X POST -d '{"input": "{}","name": "MyExecution17","stateMachineArn": "arn:aws:states:eu-central-1:1234567890:stateMachine:mystatemachine1"}' https://123456asdasdas.execute-api.eu-central-1.amazonaws.com/v1/execution | jq .
{
  "__type": "com.amazonaws.swf.service.v2.model#StateMachineDoesNotExist",
  "message": "State Machine Does Not Exist: 'arn:aws:states:eu-central-1:1234567890:stateMachine:mystatemachine1'"
}
Let's say you create another Method Response with the exact HTTP status code you want to return, in your case 404, and then an Integration Response that selects that Method Response by providing either the exact upstream HTTP response code (400, the upstream response from the StartExecution API call) or a regex such as 4\d{2} matching all 4xx errors (a boto3 sketch of this mapping follows the list below).
In that case you will return 404 for all responses where the upstream error is one of these 4xx StartExecution errors:
ExecutionAlreadyExists -> 400
ExecutionLimitExceeded -> 400
InvalidArn -> 400
InvalidExecutionInput -> 400
InvalidName -> 400
StateMachineDeleting -> 400
StateMachineDoesNotExist -> 400
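As a rough boto3 sketch of that mapping (the REST API id, resource id, and method below are placeholders for your own API):

import boto3

apigw = boto3.client('apigateway')

# Placeholder identifiers for the REST API and the /execution resource.
rest_api_id = 'abc123'
resource_id = 'def456'

# Declare a 404 method response on POST /execution.
apigw.put_method_response(
    restApiId=rest_api_id,
    resourceId=resource_id,
    httpMethod='POST',
    statusCode='404',
)

# Map any 4xx upstream StartExecution error to that 404 method response.
apigw.put_integration_response(
    restApiId=rest_api_id,
    resourceId=resource_id,
    httpMethod='POST',
    statusCode='404',
    selectionPattern=r'4\d{2}',
)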
Non Existent State Machine:
curl -s -X POST -d '{"input": "{}","name": "MyExecution17","stateMachineArn": "arn:aws:states:eu-central-1:1234567890:stateMachine:mystatemachine1"}' https://123456asdasdas.execute-api.eu-central-1.amazonaws.com/v1/execution|jq .
< HTTP/2 404
< date: Sat, 30 Jan 2021 14:12:16 GMT
< content-type: application/json
...
{
"__type": "com.amazonaws.swf.service.v2.model#StateMachineDoesNotExist",
"message": "State Machine Does Not Exist: 'arn:aws:states:eu-central-1:1234567890:stateMachine:mystatemachine1'"
}
Execution Already Exists
curl -s -X POST -d '{"input": "{}","name": "MyExecution17","stateMachineArn": "arn:aws:states:eu-central-1:1234567890:stateMachine:mystatemachine"}' https://123456asdasdas.execute-api.eu-central-1.amazonaws.com/v1/execution|jq .
* We are completely uploaded and fine
< HTTP/2 404
< date: Sat, 30 Jan 2021 14:28:27 GMT
< content-type: application/json
{
"__type": "com.amazonaws.swf.service.v2.model#ExecutionAlreadyExists",
"message": "Execution Already Exists: 'arn:aws:states:eu-central-1:1234567890:execution:mystatemachine:MyExecution17'"
}
That, I think, would be misleading.
I'm trying the AutoML Vision ML codelab from the Cloud Healthcare API GitHub tutorials.
https://github.com/GoogleCloudPlatform/healthcare/blob/master/imaging/ml_codelab/breast_density_auto_ml.ipynb
I ran the Export DICOM data cell in the Convert DICOM to JPEG section; the request, as well as all of the prerequisite cells, succeeded.
But waiting for operation completion times out and never finishes.
(The ExportDicomData request status on the Dataset page stays "Running" for over a day. I tried many times and all the requests got stuck at "Running". A few times I started over from scratch, with the same result.)
What I have done so far:
1) Removed the "output_config" wrapper, since an INVALID_ARGUMENT error occurs with it.
https://github.com/GoogleCloudPlatform/healthcare/issues/133
2) Enabled the Cloud Resource Manager API, since it is needed.
This is the cell code.
# Path to export DICOM data.
dicom_store_url = os.path.join(HEALTHCARE_API_URL, 'projects', project_id,
                               'locations', location, 'datasets', dataset_id,
                               'dicomStores', dicom_store_id)
path = dicom_store_url + ":export"

# Headers (send request in JSON format).
headers = {'Content-Type': 'application/json'}

# Body (encoded in JSON format).
# output_config = {'output_config': {'gcs_destination': {'uri_prefix': jpeg_folder, 'mime_type': 'image/jpeg; transfer-syntax=1.2.840.10008.1.2.4.50'}}}
output_config = {'gcs_destination': {'uri_prefix': jpeg_folder,
                                     'mime_type': 'image/jpeg; transfer-syntax=1.2.840.10008.1.2.4.50'}}
body = json.dumps(output_config)

# 'http' is the authorized HTTP client created in the notebook's earlier setup cells.
resp, content = http.request(path, method='POST', headers=headers, body=body)
assert resp.status == 200, 'error exporting to JPEG, code: {0}, response: {1}'.format(resp.status, content)
print('Full response:\n{0}'.format(content))

# Record operation_name so we can poll for it later.
response = json.loads(content)
operation_name = response['name']
This is the result of waiting.
Waiting for operation completion...
Full response:
{
"name": "projects/my-datalab-tutorials/locations/us-central1/datasets/sample-dataset/operations/18300485449992372225",
"metadata": {
"#type": "type.googleapis.com/google.cloud.healthcare.v1beta1.OperationMetadata",
"apiMethodName": "google.cloud.healthcare.v1beta1.dicom.DicomService.ExportDicomData",
"createTime": "2019-08-18T10:37:49.809136Z"
}
}
AssertionErrorTraceback (most recent call last)
<ipython-input-18-1a57fd38ea96> in <module>()
21 timeout = time.time() + 10*60 # Wait up to 10 minutes.
22 path = os.path.join(HEALTHCARE_API_URL, operation_name)
---> 23 _ = wait_for_operation_completion(path, timeout)
<ipython-input-18-1a57fd38ea96> in wait_for_operation_completion(path, timeout)
15
16 print('Full response:\n{0}'.format(content))
---> 17 assert success, "operation did not complete successfully in time limit"
18 print('Success!')
19 return response
AssertionError: operation did not complete successfully in time limit
The API version is v1beta1.
I was wondering if somebody has any suggestions.
Thank you.
After I kept trying several times and left one request running overnight, it finally succeeded. I don't know why.
There was a recent update to the codelab. The error message is due to the timeout in the codelab, not the actual operation; this has been addressed in the update. Please let me know if you are still running into any issues!
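For anyone still hitting the old timeout, below is a minimal sketch of a more patient polling loop. It assumes the codelab's environment (HEALTHCARE_API_URL and operation_name from the earlier cells); the oauth2client/httplib2 auth shown here is just one way to get an authorized client, and the helper name and one-hour limit are my own choices rather than part of the codelab:

import json
import os
import time

import httplib2
from oauth2client.client import GoogleCredentials

# Build an authorized HTTP client from application-default credentials
# (the codelab builds a similar client in its setup cells).
credentials = GoogleCredentials.get_application_default()
http = credentials.authorize(httplib2.Http())

def wait_for_operation_completion(path, timeout_seconds=60 * 60):
    """Poll a long-running Healthcare API operation until it reports done."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        resp, content = http.request(path, method='GET')
        assert resp.status == 200, 'polling failed: {0} {1}'.format(resp.status, content)
        response = json.loads(content)
        if response.get('done'):
            return response
        time.sleep(30)  # The export can take a while, so poll every 30 seconds.
    raise RuntimeError('operation did not complete within the time limit')

# HEALTHCARE_API_URL and operation_name come from the earlier notebook cells.
# result = wait_for_operation_completion(os.path.join(HEALTHCARE_API_URL, operation_name))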
I have the below setup which I am trying to run.
I have a Python app which is running locally on my Linux host.
I am using boto3 to connect to AWS with my access key ID and secret access key.
My user has full access to EC2, CloudWatch, S3 and Config.
My application invokes a Lambda function called mylambda.
The execution role for mylambda also has all the required permissions.
Now if I call my Lambda function from the AWS console it works fine, and I can see the execution logs in CloudWatch. But if I do it from my Linux box from my custom application, I don't see any execution logs, and I am not getting an error either.
Is there anything I am missing?
Any help is really appreciated.
I don't see it getting invoked, but surprisingly I am getting a response as below.
gaurav#random:~/lambda_s3$ python main.py
{u'Payload': <botocore.response.StreamingBody object at 0x7f74cb7f5550>, u'ExecutedVersion': '$LATEST', 'ResponseMetadata': {'RetryAttempts': 0, 'HTTPStatusCode': 200, 'RequestId': '7417534c-6263-11e8-xxx-afab1667510a', 'HTTPHeaders': {'x-amzn-requestid': '7417534c-xxx-11e8-8a24-afab1667510a', 'content-length': '4', 'x-amz-executed-version': '$LATEST', 'x-amzn-trace-id': 'root=1-5b0bdc78-7559e68acd668476bxxxx754;sampled=0', 'x-amzn-remapped-content-length': '0', 'connection': 'keep-alive', 'date': 'Mon, 28 May 2018 10:39:52 GMT', 'content-type': 'application/json'}}, u'StatusCode': 200}
{u'CreationDate': datetime.datetime(2018, 5, 27, 9, 50, 9, tzinfo=tzutc()), u'Name': 'bucketname'}
gaurav#random:~/lambda_s3$
My sample app is as below
#!/usr/bin/python
import boto3
import json
import base64

d = {'key': 10, 'key2': 20}

client = boto3.client('lambda')
response = client.invoke(
    FunctionName='mylambda',
    InvocationType='RequestResponse',
    #LogType='None',
    ClientContext=base64.b64encode(b'{"custom":{"foo":"bar", '
                                   b'"fuzzy":"wuzzy"}}').decode('utf-8'),
    Payload=json.dumps(d)
)
print(response)
Make sure that you're actually invoking the Lambda correctly. Lambda error handling can be a bit tricky: with boto3, the invoke method doesn't necessarily raise even if the function invocation fails. You have to check the StatusCode and FunctionError fields in the response.
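A minimal sketch of that check, reusing the function name and payload from the question:

import json
import boto3

client = boto3.client('lambda')
response = client.invoke(
    FunctionName='mylambda',
    InvocationType='RequestResponse',
    Payload=json.dumps({'key': 10, 'key2': 20}),
)

# A 200 StatusCode only means the invocation request was accepted.
# If the function itself raised an error, FunctionError is set and the
# payload contains the error details.
payload = response['Payload'].read().decode('utf-8')
if response.get('FunctionError'):
    print('Function failed:', payload)
else:
    print('StatusCode:', response['StatusCode'])
    print('Function result:', payload)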
You mentioned that your user has full access to EC2, CloudWatch, S3, and Config. For your use case, you also need to add lambda:InvokeFunction to your user's permissions.
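As an illustration only (the user name and policy name below are placeholders), that permission could be attached as an inline policy with boto3:

import json
import boto3

iam = boto3.client('iam')

# Inline policy granting permission to invoke the 'mylambda' function from the question.
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': 'lambda:InvokeFunction',
        'Resource': 'arn:aws:lambda:*:*:function:mylambda',
    }],
}

iam.put_user_policy(
    UserName='my-user',            # placeholder for the IAM user in the question
    PolicyName='invoke-mylambda',  # placeholder policy name
    PolicyDocument=json.dumps(policy),
)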
I've created a service account and furnished it with a private key in JSON format (/adc.json). It can be loaded into the google-cloud Python client via the Client.from_service_account_json function just fine. But when I try to call the Monitoring API to write a custom metric, I get a 403 error like below.
In [1]: from google.cloud import monitoring
In [2]: client = monitoring.Client.from_service_account_json('/adc.json')
In [6]: resource = client.resource('gce_instance', labels={'instance_id': '1234567890123456789', 'zone': 'us-central1-f'})
In [7]: metric = client.metric(type_='custom.googleapis.com/my_metric', labels={'status': 'successful'})
In [9]: from datetime import datetime
In [10]: end_time = datetime.utcnow()
In [11]: client.write_point(metric=metric, resource=resource, value=3.14, end_time=end_time)
---------------------------------------------------------------------------
Forbidden Traceback (most recent call last)
<ipython-input-11-b030f6399aa2> in <module>()
----> 1 client.write_point(metric=metric, resource=resource, value=3.14, end_time=end_time)
/usr/local/lib/python3.5/site-packages/google/cloud/monitoring/client.py in write_point(self, metric, resource, value, end_time, start_time)
599 timeseries = self.time_series(
600 metric, resource, value, end_time, start_time)
--> 601 self.write_time_series([timeseries])
/usr/local/lib/python3.5/site-packages/google/cloud/monitoring/client.py in write_time_series(self, timeseries_list)
544 for timeseries in timeseries_list]
545 self._connection.api_request(method='POST', path=path,
--> 546 data={'timeSeries': timeseries_dict})
547
548 def write_point(self, metric, resource, value,
/usr/local/lib/python3.5/site-packages/google/cloud/_http.py in api_request(self, method, path, query_params, data, content_type, headers, api_base_url, api_version, expect_json, _target_object)
301 if not 200 <= response.status < 300:
302 raise make_exception(response, content,
--> 303 error_info=method + ' ' + url)
304
305 string_or_bytes = (six.binary_type, six.text_type)
Forbidden: 403 User is not authorized to access the project monitoring records. (POST https://monitoring.googleapis.com/v3/projects/MY-PROJECT/timeSeries/)
In the GCP Access Control panel, I didn't see a predefined role scoped specifically to the Stackdriver Monitoring API.
I've tried the Project Viewer and Service Account Actor predefined roles, but neither worked. I am hesitant to assign the Project Editor role to this service account because it feels like too broad a scope for a Stackdriver-dedicated service account credential. So what would be the correct role to assign to this service account? Thanks.
You are right that it's too broad, and we are working on finer-grained roles, but, as of today, "Project Editor" is the correct role.
If you are running on a GCE VM and omit the private key, the Stackdriver monitoring agent will by default attempt to use the VM's default service account. This will work as long as the VM has the https://www.googleapis.com/auth/monitoring.write scope (this should be turned on by default for all GCE VMs these days). See this page for a detailed description of what credentials the agent needs.