I am trying to change the expiration time of a token, meaning the last line in this output:
[admin@dev:~]$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-instance-role
{
"Code" : "Success",
"LastUpdated" : "2023-02-06T07:00:00Z",
"Type" : "AWS-HMAC",
"AccessKeyId" : "myaccesskey",
"SecretAccessKey" : "mytokenxxx",
"Token" : "xxx",
"Expiration" : "2023-02-06T13:00:00Z"
Strangely, I cannot find an easy way to do that. Is it possible to change that expiration time without creating a new IAM role?
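For reference, here is a minimal Python sketch of the same metadata call (this is my own addition, not part of the original request), using IMDSv2, which needs a session token first; the role name my-instance-role is taken from the question:
import json
import urllib.request

IMDS = "http://169.254.169.254"
ROLE = "my-instance-role"  # role name taken from the question

# IMDSv2: request a short-lived session token first
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

# Read the temporary credentials issued for the instance role
cred_req = urllib.request.Request(
    f"{IMDS}/latest/meta-data/iam/security-credentials/{ROLE}",
    headers={"X-aws-ec2-metadata-token": token},
)
creds = json.loads(urllib.request.urlopen(cred_req).read())
print(creds["Expiration"])  # e.g. "2023-02-06T13:00:00Z"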
I am trying to add the SSE-C algorithm, key, and MD5 to an already working policy:
{
"expiration" : "2022-11-22T18:00:16.383Z",
"conditions" :[
{"bucket" : "<bucket>"},
{"key" : "<file path1>"},
{"x-amz-algorithm" : "AWS4-HMAC-SHA256"},
{"x-amz-credential" : "AKIAIOSFODNN7EXAMPLE/20151229/us-east-1/s3/aws4_request"},
{"x-amz-date" : "20221121T180016Z"},
["content-length-range", 10, 20000]
]
}
The goal is to create another policy that applies SSE-C encryption to the file being uploaded:
"expiration" : "2022-11-22T18:00:16.383Z",
"conditions" :[
{"bucket" : "<bucket>"},
{"key" : "<file path2>"},
{"x-amz-algorithm" : "AWS4-HMAC-SHA256"},
{"x-amz-credential" : "AAKIAIOSFODNN7EXAMPLE/20151229/us-east-1/s3/aws4_request"},
{"x-amz-date" : "20221121T180016Z"},
{"x-amz-server-side-encryption-customer-algorithm" : "AES256"},
{"x-amz-server-side-encryption-customer-key" : "In3vRc+WpFCvISbI8CPbNW7OSwxlS2bcq0XY0YcpYP0="},
{"x-amz-server-side-encryption-customer-key-MD5" : "2Z32DEb90ZF370xDkf6ing=="},
["content-length-range", 10, 20000]
]
}
When I add the SSE-C-related information to the policy, the upload fails with the error below:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Invalid according to Policy: Policy Condition failed: ["eq", "$x-amz-server-side-encryption-customer-algorithm", "AES256"]</Message>
<RequestId>TWZYMWT37G1TDG7D</RequestId>
<HostId>xh+2GQv90MnMGJGNt2tvjydoKCE8AIGUMrq7SniuNb15e86Hgt+jkS0X9KExR6bgoXgaMevvvf0=</HostId>
</Error>
I am not sure what is wrong with the policy. In both cases these are the headers I am including in the POST request.
Can someone please help me identify what I am doing wrong here?
Thanks in advance.
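For comparison, here is a minimal Python sketch (an assumption on my part, not the original request) of how the multipart POST form could be assembled so that every condition in the policy has a matching form field; the URL, file name, policy, and signature values are placeholders, and sending the SSE-C values as form fields rather than HTTP headers is also an assumption:
import requests

# Placeholders: fill in the real bucket URL, base64-encoded policy, and signature.
url = "https://<bucket>.s3.amazonaws.com/"
fields = {
    "key": "<file path2>",
    "x-amz-algorithm": "AWS4-HMAC-SHA256",
    "x-amz-credential": "AKIAIOSFODNN7EXAMPLE/20151229/us-east-1/s3/aws4_request",
    "x-amz-date": "20221121T180016Z",
    # SSE-C values included as form fields so they match the policy conditions
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": "In3vRc+WpFCvISbI8CPbNW7OSwxlS2bcq0XY0YcpYP0=",
    "x-amz-server-side-encryption-customer-key-MD5": "2Z32DEb90ZF370xDkf6ing==",
    "policy": "<base64-encoded policy document>",
    "x-amz-signature": "<signature over the base64-encoded policy>",
}
# requests places the data fields before the file part, as S3 POST uploads expect
with open("upload.bin", "rb") as f:
    resp = requests.post(url, data=fields, files={"file": f})
print(resp.status_code, resp.text)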
I am trying to send a binary file and string parameters to AWS API Gateway.
This is the mapping template on the API Gateway POST method:
{
"imageFile" : $input.params('imageFile'),
"purdueUsername" : $input.params('purdueUsername'),
"description" : $input.params('description'),
"price" : $input.params('price'),
"longitude" : $input.params('longitude'),
"latitude" : $input.params('latitude'),
"category" : $input.params('category'),
}
Making a POST request results in this:
When I try this:
{
"imageFile" : "$input.params('imageFile')",
"purdueUsername" : "$input.params('purdueUsername')",
"description" : "$input.params('description')",
"price" : "$input.params('price')",
"longitude" : "$input.params('longitude')",
"latitude" : "$input.params('latitude')",
"category" : "$input.params('category')",
}
I am getting empty parameters. The API is not receiving the parameters I am sending through the POST request.
How should I change the mapping template?
Note: when I only have imageFile in the mapping template and only send the binary file without extra parameters, it works completely fine.
{
"imageFile" : "$input.body"
}
However, I want to be able to send other parameters besides the binary file.
This is how I solved the problem: I send the binary file in the body of the POST request and the other parameters as headers.
This is the mapping template I put on the AWS API Gateway (a sketch of the matching client call follows the template):
{
"purdueUsername" : "$input.params('purdueUsername')",
"description" : "$input.params('description')",
"price" : "$input.params('price')",
"longitude" : "$input.params('longitude')",
"latitude" : "$input.params('latitude')",
"category" : "$input.params('category')",
"isbnNumber" : "$input.params('isbnNumber')",
"imageFile" : "$input.body"
}
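A small Python sketch of what the matching client call could look like, with the binary file in the raw body and the other values as headers; the URL, file name, and header values are placeholders I made up:
import requests

# Placeholder endpoint for the deployed API Gateway stage
url = "https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/<resource>"

with open("book-cover.jpg", "rb") as f:
    resp = requests.post(
        url,
        data=f.read(),  # binary file goes in the raw body, read by $input.body
        headers={
            "Content-Type": "image/jpeg",
            # remaining parameters travel as headers, picked up by $input.params()
            "purdueUsername": "jdoe",
            "description": "Intro to Algorithms, barely used",
            "price": "45",
            "longitude": "-86.9143",
            "latitude": "40.4237",
            "category": "textbooks",
            "isbnNumber": "9780262033848",
        },
    )
print(resp.status_code, resp.text)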
I deployed Elasticsearch and Kibana 7.10.1, and I am streaming CloudWatch metrics data (raw JSON) to Elasticsearch.
The raw metric data format looks like this:
{
"metric_stream_name" : "metric-stream-elk",
"account_id" : "264100014405",
"region" : "ap-southeast-2",
"namespace" : "AWS/DynamoDB",
"metric_name" : "ReturnedRecordsCount",
"dimensions" : {
"Operation" : "GetRecords",
"StreamLabel" : "2021-06-18T01:12:31.851",
"TableName" : "dev-dms-iac-events"
},
"timestamp" : 1624924620000,
"value" : {
"count" : 121,
"sum" : 0,
"max" : 0,
"min" : 0
},
"unit" : "Count"
}
I can see that this raw data is saved in Elasticsearch under a custom index name, aws-metrics-YYYY-MM-DD. Now how can I let Kibana read metrics from this index?
I don't want to use Metricbeat because it pulls metrics from AWS, whereas my flow streams AWS metrics into Elasticsearch. How can I achieve this?
Is it possible to capture the startTime and endTime of a Lambda function's execution, along with the parameters that were passed to it?
I couldn't find any state-change event configuration that could be set up to send events when a Lambda function starts or terminates.
A crappy alternative is to record the parameters and start time in a database when the Lambda is invoked and have the Lambda update the end time as the final step before it completes. This appears prone to failure scenarios, like the function erroring out before updating the DB.
Are there other alternatives for capturing this information?
AWS X-Ray may be a good solution here. It is easy to integrate and use. You can enable it from the AWS console:
Go to your Lambda function's Configuration tab.
Scroll down and, in the AWS X-Ray box, choose Active tracing.
Without any configuration in the code, it will record the start_time and end_time of the function along with additional metadata. You can also integrate it as a library in your Lambda function and send additional subsegments, such as request parameters (see the sketch after the sample payload). Please check here for the documentation.
Here is a sample payload:
{
"trace_id" : "1-5759e988-bd862e3fe1be46a994272793",
"id" : "defdfd9912dc5a56",
"start_time" : 1461096053.37518,
"end_time" : 1461096053.4042,
"name" : "www.example.com",
"http" : {
"request" : {
"url" : "https://www.example.com/health",
"method" : "GET",
"user_agent" : "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/601.7.7",
"client_ip" : "11.0.3.111"
},
"response" : {
"status" : 200,
"content_length" : 86
}
},
"subsegments" : [
{
"id" : "53995c3f42cd8ad8",
"name" : "api.example.com",
"start_time" : 1461096053.37769,
"end_time" : 1461096053.40379,
"namespace" : "remote",
"http" : {
"request" : {
"url" : "https://api.example.com/health",
"method" : "POST",
"traced" : true
},
"response" : {
"status" : 200,
"content_length" : 861
}
}
}
]
}
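To illustrate the library integration mentioned above, here is a minimal Python sketch using the aws-xray-sdk; the subsegment name, annotation, and handler body are placeholders:
from aws_xray_sdk.core import xray_recorder

def handler(event, context):
    # X-Ray itself records the function's start and end times;
    # this custom subsegment records the invocation parameters.
    subsegment = xray_recorder.begin_subsegment("request-parameters")
    try:
        subsegment.put_metadata("event", event)     # full payload as metadata
        subsegment.put_annotation("source", "api")  # annotations are indexed and searchable
        # ... actual work here ...
        return {"statusCode": 200}
    finally:
        xray_recorder.end_subsegment()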
Using Amazon Data Pipeline, I'm trying to use a SqlActivity to execute some SQL on a non-Redshift data store (SnowflakeDB, for the curious). It seems like it should be possible to do that with a SqlActivity that uses a JdbcDatabase. My first warning sign was that the WYSIWYG editor on Amazon didn't even let me try to create a JdbcDatabase, but I plowed on anyway and wrote and uploaded a JSON definition by hand (here's the relevant bit):
{
"id" : "ExportToSnowflake",
"name" : "ExportToSnowflake",
"type" : "SqlActivity",
"schedule" : { "ref" : "DefaultSchedule" },
"database" : { "ref" : "SnowflakeDatabase" },
"dependsOn" : { "ref" : "ImportTickets" },
"script" : "COPY INTO ZENDESK_TICKETS_INCREMENTAL_PLAYGROUND FROM #zendesk_incremental_stage"
},
{
"id" : "SnowflakeDatabase",
"name" : "SnowflakeDatabase",
"type" : "JdbcDatabase",
"jdbcDriverClass" : "com.snowflake.client.jdbc.SnowflakeDriver",
"username" : "redacted",
"connectionString" : "jdbc:snowflake://redacted.snowflakecomputing.com:8080/?account=redacted&db=redacted&schema=PUBLIC&ssl=on",
"*password" : "redacted"
}
When I upload this into the designer, it refuses to activate, giving me this error message:
ERROR: 'database' values must be of type 'RedshiftDatabase'. Found values of type 'JdbcDatabase'
The rest of the pipeline definition works fine without any errors. I've confirmed that it activates and runs to success if I simply leave this step out.
I am unable to find a single mention on the entire Internet of someone actually using a JdbcDatabase from Data Pipeline. Does it just plain not work? Why is it even mentioned in the documentation if there's no way to actually use it? Or am I missing something? I'd love to know if this is a futile exercise before I blow more of the client's money trying to figure out what's going on.
In your JdbcDatabase you need to have the following property:
jdbcDriverJarUri: "[S3 path to the driver jar file]"
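For example, the JdbcDatabase object from the question could then look like this (the S3 path to the Snowflake JDBC driver jar is a placeholder):
{
"id" : "SnowflakeDatabase",
"name" : "SnowflakeDatabase",
"type" : "JdbcDatabase",
"jdbcDriverClass" : "com.snowflake.client.jdbc.SnowflakeDriver",
"jdbcDriverJarUri" : "s3://your-bucket/drivers/snowflake-jdbc.jar",
"username" : "redacted",
"connectionString" : "jdbc:snowflake://redacted.snowflakecomputing.com:8080/?account=redacted&db=redacted&schema=PUBLIC&ssl=on",
"*password" : "redacted"
}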