My CloudFront distribution is aggressively caching my index.html file from the initial GET request onward. While I use versioned file names for all other files, I want the freshest index.html on every request. The origin is an S3 bucket, and I upload the file using the --cache-control flag:
aws s3 cp index.html s3://buck/index.html --cache-control "max-age=0"
When the above did not work, I tried:
aws s3 cp index.html s3://buck/index.html --cache-control "no-store"
I have verified that the headers are set on the S3 object:
> aws s3api head-object --bucket buck --key index.html
{
"AcceptRanges": "bytes",
"LastModified": "Tue, 31 Mar 2020 14:10:40 GMT",
"ContentLength": 3265,
"ETag": "",
"CacheControl": "no-store",
"ContentType": "text/html",
"Metadata": {}
}
What smells is that after I change the cache control to no-store, the response headers in the browser still show cache-control: max-age=0. I also see x-cache: Miss from CloudFront, but I guess that would be expected? Opening the site in a private window shows the correct response header.
My CacheBehaviors for the distribution are:
{
"Quantity": 1,
"Items": [
{
"PathPattern": "index.html",
"TargetOriginId": "hostingS3Bucket",
"ForwardedValues": {
"QueryString": false,
"Cookies": {
"Forward": "none"
},
"Headers": {
"Quantity": 0
},
"QueryStringCacheKeys": {
"Quantity": 0
}
},
"TrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"ViewerProtocolPolicy": "redirect-to-https",
"MinTTL": 0,
"AllowedMethods": {
"Quantity": 2,
"Items": [
"HEAD",
"GET"
],
"CachedMethods": {
"Quantity": 2,
"Items": [
"HEAD",
"GET"
]
}
},
"SmoothStreaming": false,
"DefaultTTL": 0,
"MaxTTL": 0,
"Compress": true,
"LambdaFunctionAssociations": {
"Quantity": 0
},
"FieldLevelEncryptionId": ""
}
]
}
Going by the AWS CloudFront docs, it seems like CloudFront and browsers should respect the headers when the origin sends Cache-Control: no-store. I'm not sure what I am doing wrong. Do I have to invalidate the cache? I would like to understand this behavior so I can update my site in a predictable way.
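Not an answer to the underlying question, but for what it's worth: CloudFront will keep serving whatever copy it cached before the header change, so one way to force it to refetch index.html is to issue an invalidation. A minimal boto3 sketch, where the distribution ID is a placeholder:

```python
import time

def build_invalidation_batch(paths):
    """Build the InvalidationBatch payload for CloudFront's CreateInvalidation.

    CallerReference must be unique per request; a timestamp is enough here.
    """
    return {
        "Paths": {"Quantity": len(paths), "Items": paths},
        "CallerReference": str(time.time()),
    }

def invalidate(distribution_id, paths):
    # boto3 is imported lazily so the helper above stays testable offline.
    import boto3

    client = boto3.client("cloudfront")
    return client.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch=build_invalidation_batch(paths),
    )

if __name__ == "__main__":
    # "EDFDVBD6EXAMPLE" is a placeholder distribution ID.
    invalidate("EDFDVBD6EXAMPLE", ["/index.html"])
```

This only clears the copy CloudFront already holds; with DefaultTTL and MaxTTL at 0 as in your behavior, fresh fetches should then revalidate on every request.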
I am attempting to integrate an API with my application, which was built using Nuxt.js and is hosted with AWS Amplify. I've added a proxy; it works perfectly locally, but it returns 405 MethodNotAllowed on the AWS server for a POST method.
For the proxy, I've made the following changes to rewrite the path:
axios: {
proxy: true
},
proxy: {
'/lead/': { target: 'https://api.apidomain.org/v2', pathRewrite: { '^/lead/': '' },
changeOrigin: true }
},
I've read the Amplify documentation, which says we can update the redirects, so I've tried:
[
{
"source": "/<*>",
"target": "/index.html",
"status": "404-200",
"condition": null
},
{
"source": "</^[^.]+$|\\.(?!(css|gif|ico|jpg|js|png|txt|svg|woff|ttf|map|json)$)([^.]+$)/>",
"target": "/index.html",
"status": "200",
"condition": null
},
{
"source": "/lead/<*>",
"target": "https://api.apidomain.org/v2/<*>",
"status": "200",
"condition": null
}
]
The first two rules are the defaults; I added the third rule, but I am still getting the 405 MethodNotAllowed error. What am I missing?
Amplify redirects are executed from the top of the list down. This was fixed by reordering the rules:
[
{
"source": "/lead/<*>",
"target": "https://api.apidomain.org/v2/<*>",
"status": "200",
"condition": null
},
{
"source": "</^[^.]+$|\\.(?!(css|gif|ico|jpg|js|png|txt|svg|woff|woff2|ttf|map|json)$)([^.]+$)/>",
"target": "/index.html",
"status": "200",
"condition": null
}
]
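For anyone wondering why the order matters: Amplify applies the first rule whose source matches, so with the original ordering the catch-all SPA rewrite swallowed /lead/ requests before the proxy rule was ever reached. A rough sketch of that first-match behavior (the pattern handling here is deliberately simplified and is not Amplify's real matcher):

```python
import re

def first_match(rules, path):
    """Return the target of the first rule whose source matches the path.

    Simplified illustration of top-down evaluation: '<*>' is treated as a
    wildcard suffix and '</.../>' as a regex.
    """
    for rule in rules:
        source = rule["source"]
        if source.startswith("</") and source.endswith("/>"):
            if re.search(source[2:-2], path):
                return rule["target"]
        elif source.endswith("<*>"):
            if path.startswith(source[:-3]):
                return rule["target"]
        elif source == path:
            return rule["target"]
    return None

# With the catch-all rewrite first, /lead/foo never reaches the proxy rule.
wrong_order = [
    {"source": "/<*>", "target": "/index.html"},
    {"source": "/lead/<*>", "target": "https://api.apidomain.org/v2/<*>"},
]
right_order = list(reversed(wrong_order))
```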
I am trying to save my audio feed to AWS S3.
The acquire and start calls give the proper responses as described in the documentation, but when I try to stop the recording it throws a 404 error code, and the recording is not found in the AWS S3 bucket.
Below are the request and response for each call.
/acquire
#Request body
body = {"cname": cname, "uid": uid, "clientRequest": {"resourceExpiredHour": 24}}
#Response
"Code": 200,
"Body":
{
"resourceId": "IqCWKgW2CD0KqnZm0lcCzQisVFotYiClVu2jIxWs5Rpidc9y5HhK1HEHAd77Fy1-AK9piRDWUYNlU-AC7dnZfo6QVukbSB_eh3WqTv9_ULLK-EXxt93zdO8yAzY-3SGMPVJ5x4Rx3DsHgvBfnzJWhOvjMFEcEU9X4WMmtdXJxqjV3hhpsx74tefhzfPA2A7J2UDlmF4RRuINeP4C9uMRzPmrHlHB3BrQcogcBfdgb9DAx_ySNMUXGMQX3iGFuWBtjNRB4OLA2HS04VkSRulx3IyC5zkambri3ROG6vFV04jsPkeWb3hKAdOaozYyH4Sq42Buu7dM2ndVxCMgoiPDCi-0JCBL77RkuOijiOGQtOU-w9QKoPlTXRNeTur1MSfouE0A-4eDgu79FxK5abX7dckwcv9R3AExvs47U-uhmBh8vE6NXx4dQrXsu9Krx7Ao"
}
/start
#Request body
body = {
"uid": uid,
"cname": cname,
"clientRequest": {
"recordingConfig": {
"maxIdleTime": 30,
"streamTypes": 0,
"channelType": 0,
},
"recordingFileConfig": {"avFileType": ["hls"]},
"storageConfig": {
"accessKey": ACCESS_ID,
"region": 8,
"bucket": BUCKET_NAME,
"secretKey": ACCESS_SECRET,
"vendor": 1,
"fileNamePrefix": [cname, TODAY_DATE.strftime("%d%m%Y")],
},
},
}
#Response
"Code": 200,
"Body":
{
"sid": "fd987833cb49dc9ba98ceb8498ac23c4",
"resourceId": "IqCWKgW2CD0KqnZm0lcCzQisVFotYiClVu2jIxWs5Rpidc9y5HhK1HEHAd77Fy1-AK9piRDWUYNlU-AC7dnZfo6QVukbSB_eh3WqTv9_ULLK-EXxt93zdO8yAzY-3SGMPVJ5x4Rx3DsHgvBfnzJWhOvjMFEcEU9X4WMmtdXJxqjV3hhpsx74tefhzfPA2A7J2UDlmF4RRuINeP4C9uMRzPmrHlHB3BrQcogcBfdgb9DAx_ySNMUXGMQX3iGFuWBtjNRB4OLA2HS04VkSRulx3IyC5zkambri3ROG6vFV04jsPkeWb3hKAdOaozYyH4Sq42Buu7dM2ndVxCMgoiPDCi-0JCBL77RkuOijiOGQtOU-w9QKoPlTXRNeTur1MSfouE0A-4eDgu79FxK5abX7dckwcv9R3AExvs47U-uhmBh8vE6NXx4dQrXsu9Krx7Ao"
}
/stop
#Request body
body = {"cname": cname, "uid": uid, "clientRequest": {}}
#Response
{
"resourceId": "IqCWKgW2CD0KqnZm0lcCzQisVFotYiClVu2jIxWs5Rpidc9y5HhK1HEHAd77Fy1-AK9piRDWUYNlU-AC7dnZfo6QVukbSB_eh3WqTv9_ULLK-EXxt93zdO8yAzY-3SGMPVJ5x4Rx3DsHgvBfnzJWhOvjMFEcEU9X4WMmtdXJxqjV3hhpsx74tefhzfPA2A7J2UDlmF4RRuINeP4C9uMRzPmrHlHB3BrQcogcBfdgb9DAx_ySNMUXGMQX3iGFuWBtjNRB4OLA2HS04VkSRulx3IyC5zkambri3ROG6vFV04jsPkeWb3hKAdOaozYyH4Sq42Buu7dM2ndVxCMgoiPDCi-0JCBL77RkuOijiOGQtOU-w9QKoPlTXRNeTur1MSfouE0A-4eDgu79FxK5abX7dckwcv9R3AExvs47U-uhmBh8vE6NXx4dQrXsu9Krx7Ao",
"sid": "fd987833cb49dc9ba98ceb8498ac23c4",
"code": 404,
"serverResponse": {
"command": "StopCloudRecorder",
"payload": {
"message": "Failed to find worker."
},
"subscribeModeBitmask": 1,
"vid": "431306"
}
}
My AWS bucket CORS policy is as follows:
[
{
"AllowedHeaders": [
"Authorization",
"*"
],
"AllowedMethods": [
"HEAD",
"POST"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": [
"ETag",
"x-amz-meta-custom-header",
"x-amz-storage-class"
],
"MaxAgeSeconds": 5000
}
]
I was facing the same issue.
I wanted to record a session even with one user, but it seems that this is not possible; there must be two users with different uids. I am not sure why, but at least the following worked for me.
Try:
Start your stream with one user, e.g. uid: 1
Connect with another device to the same channel but a different uid
Start the recording
Before stopping your recording, make sure that the query request is returning data
If you are getting data from the query request, then your stream is recording.
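To make that check concrete, you can hit the cloud recording query endpoint before calling stop. A rough sketch, assuming the usual Agora RESTful URL layout (the app ID, resource ID, sid, and customer key/secret are placeholders; verify the exact path against the current Agora docs):

```python
def build_query_url(app_id, resource_id, sid, mode="mix"):
    """Assemble the cloud recording query URL (assumed Agora REST layout)."""
    return (
        "https://api.agora.io/v1/apps/{}/cloudrecording/"
        "resourceid/{}/sid/{}/mode/{}/query"
    ).format(app_id, resource_id, sid, mode)

def query_recording(app_id, resource_id, sid, auth):
    # requests is imported lazily so build_query_url stays testable offline.
    import requests

    resp = requests.get(build_query_url(app_id, resource_id, sid), auth=auth)
    return resp.json()

if __name__ == "__main__":
    # Placeholders: substitute your real app ID, resource ID, sid, and key pair.
    body = query_recording(
        "APP_ID", "RESOURCE_ID", "SID", ("customer_key", "customer_secret")
    )
    print(body)
```

If this returns status data instead of a 404, the worker is alive and the stream is actually being recorded.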
Enable all methods. Try configuring your CloudFront distribution with these settings via the AWS CLI:
{
"TargetOriginId": "S3-AAAAA",
"TrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"ViewerProtocolPolicy": "allow-all",
"AllowedMethods": {
"Quantity": 7,
"Items": [
"HEAD",
"DELETE",
"POST",
"GET",
"OPTIONS",
"PUT",
"PATCH"
],
"CachedMethods": {
"Quantity": 2,
"Items": [
"HEAD",
"GET"
]
}
},
"SmoothStreaming": false,
"Compress": false,
"LambdaFunctionAssociations": {
"Quantity": 1,
"Items": [
{
"LambdaFunctionARN": "arn:aws:lambda:us-east-1:AAAA",
"EventType": "origin-request",
"IncludeBody": true
}
]
},
"FieldLevelEncryptionId": "",
"ForwardedValues": {
"QueryString": false,
"Cookies": {
"Forward": "all"
},
"Headers": {
"Quantity": 0
},
"QueryStringCacheKeys": {
"Quantity": 0
}
},
"MinTTL": 0,
"DefaultTTL": 86400,
"MaxTTL": 31536000
}
GET requests return fine; however, I can't set up POST requests.
Example of response to POST request:
I don't need to upload to S3 via POST; I need to be able to send POST requests to the static website.
UPDATE: it doesn't work with a custom origin either.
UPDATE: resolved by destroying and creating a new CloudFront distribution with the same settings.
If you are using OAI to connect CloudFront with S3, then POST is not supported. From the docs:
POST requests are not supported.
I found a way to resolve it by destroying and creating a new CDN.
How do I get the estimated cost of an EC2 instance of type m4.large with EBS storage of about 500 GB through the Java SDK? Is there any specific SDK provided by AWS for this? I have looked at many AWS APIs but didn't find any; the link I did find was very hard to understand and to fetch values from in terms of instance type and cost. Here is the link:
https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonS3/current/us-east-1/index.json
Is there any Java API or SDK available to fetch the estimated cost of an instance?
You can use the AWS Cost Management APIs:
The Cost Explorer API allows you to programmatically query your cost and usage data. You can query for aggregated data such as total monthly costs or total daily usage. You can also query for granular data, such as the number of daily write operations for Amazon DynamoDB database tables in your production environment.
With GetCostAndUsage you can get the estimated cost. For more information, read the following documents on the AWS website:
https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_GetCostAndUsage.html
https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/Welcome.html
Here is a sample request:
POST / HTTP/1.1
Host: ce.us-east-1.amazonaws.com
x-amz-Date: <Date>
Authorization: AWS4-HMAC-SHA256 Credential=<Credential>, SignedHeaders=contenttype;date;host;user-agent;x-amz-date;x-amz-target;x-amzn-requestid,Signature=<Signature>
User-Agent: <UserAgentString>
Content-Type: application/x-amz-json-1.1
Content-Length: <PayloadSizeBytes>
Connection: Keep-Alive
X-Amz-Target: AWSInsightsIndexService.GetCostAndUsage
{
"TimePeriod": {
"Start":"2017-09-01",
"End": "2017-10-01"
},
"Granularity": "MONTHLY",
"Filter": {
"Dimensions": {
"Key": "SERVICE",
"Values": [
"Amazon Simple Storage Service"
]
}
},
"GroupBy":[
{
"Type":"DIMENSION",
"Key":"SERVICE"
},
{
"Type":"TAG",
"Key":"Environment"
}
],
"Metrics":["BlendedCost", "UnblendedCost", "UsageQuantity"]
}
And response:
HTTP/1.1 200 OK
x-amzn-RequestId: <RequestId>
Content-Type: application/x-amz-json-1.1
Content-Length: <PayloadSizeBytes>
Date: <Date>
{
"GroupDefinitions": [
{
"Key": "SERVICE",
"Type": "DIMENSION"
},
{
"Key": "Environment",
"Type": "TAG"
}
],
"ResultsByTime": [
{
"Estimated": false,
"Groups": [
{
"Keys": [
"Amazon Simple Storage Service",
"Environment$Prod"
],
"Metrics": {
"BlendedCost": {
"Amount": "39.1603300457",
"Unit": "USD"
},
"UnblendedCost": {
"Amount": "39.1603300457",
"Unit": "USD"
},
"UsageQuantity": {
"Amount": "173842.5440074444",
"Unit": "N/A"
}
}
},
{
"Keys": [
"Amazon Simple Storage Service",
"Environment$Test"
],
"Metrics": {
"BlendedCost": {
"Amount": "0.1337464807",
"Unit": "USD"
},
"UnblendedCost": {
"Amount": "0.1337464807",
"Unit": "USD"
},
"UsageQuantity": {
"Amount": "15992.0786663399",
"Unit": "N/A"
}
}
}
],
"TimePeriod": {
"End": "2017-10-01",
"Start": "2017-09-01"
},
"Total": {}
}
]
}
For Java SDK check this page:
https://docs.aws.amazon.com/goto/SdkForJava/ce-2017-10-25/GetCostAndUsage
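The Java SDK page above has the canonical signatures; if it helps to see the same GetCostAndUsage call end to end, here is a minimal boto3 sketch mirroring the sample request (the dates and filter values are just the sample ones):

```python
def build_cost_request(start, end, service):
    """Mirror the sample GetCostAndUsage payload as boto3 keyword arguments."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Filter": {"Dimensions": {"Key": "SERVICE", "Values": [service]}},
        "Metrics": ["BlendedCost", "UnblendedCost", "UsageQuantity"],
    }

def get_cost_and_usage(start, end, service):
    # boto3 is imported lazily so build_cost_request stays testable offline.
    import boto3

    ce = boto3.client("ce")  # Cost Explorer endpoint (ce.us-east-1.amazonaws.com)
    return ce.get_cost_and_usage(**build_cost_request(start, end, service))

if __name__ == "__main__":
    print(get_cost_and_usage("2017-09-01", "2017-10-01",
                             "Amazon Simple Storage Service"))
```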
Also, the AWS Price List Service is helpful for resources you do not already have in your account, for example, if you want to build an AWS calculator.
The GetProducts API gives you the full price information, and based on that you can calculate costs on your side.
Sample GetProducts request:
POST / HTTP/1.1
Host: api.pricing.<region>.<domain>
x-amz-Date: <Date>
Authorization: AWS4-HMAC-SHA256 Credential=<Credential>, SignedHeaders=contenttype;date;host;user-agent;x-amz-date;x-amz-target;x-amzn-requestid,Signature=<Signature>
User-Agent: <UserAgentString>
Content-Type: application/x-amz-json-1.1
Content-Length: <PayloadSizeBytes>
Connection: Keep-Alive
X-Amz-Target: AWSPriceListService.GetProducts
{
"Filters": [
{
"Type": "TERM_MATCH",
"Field": "ServiceCode",
"Value": "AmazonEC2"
},
{
"Type": "TERM_MATCH",
"Field": "volumeType",
"Value": "Provisioned IOPS"
}
],
"FormatVersion": "aws_v1",
"NextToken": null,
"MaxResults": 1
}
and response:
HTTP/1.1 200 OK
x-amzn-RequestId: <RequestId>
Content-Type: application/x-amz-json-1.1
Content-Length: <PayloadSizeBytes>
Date: <Date>
{
"FormatVersion": "aws_v1",
"NextToken": "57r3UcqRjDujbzWfHF7Ciw==:ywSmZsD3mtpQmQLQ5XfOsIMkYybSj+vAT+kGmwMFq+K9DGmIoJkz7lunVeamiOPgthdWSO2a7YKojCO+zY4dJmuNl2QvbNhXs+AJ2Ufn7xGmJncNI2TsEuAsVCUfTAvAQNcwwamtk6XuZ4YdNnooV62FjkV3ZAn40d9+wAxV7+FImvhUHi/+f8afgZdGh2zPUlH8jlV9uUtj0oHp8+DhPUuHXh+WBII1E/aoKpPSm3c=",
"PriceList": [
"{\"product\":{\"productFamily\":\"Storage\",\"attributes\":{\"storageMedia\":\"SSD-backed\",\"maxThroughputvolume\":\"320 MB/sec\",\"volumeType\":\"Provisioned IOPS\",\"maxIopsvolume\":\"20000\",\"servicecode\":\"AmazonEC2\",\"usagetype\":\"CAN1-EBS:VolumeUsage.piops\",\"locationType\":\"AWS Region\",\"location\":\"Canada (Central)\",\"servicename\":\"Amazon Elastic Compute Cloud\",\"maxVolumeSize\":\"16 TiB\",\"operation\":\"\"},\"sku\":\"WQGC34PB2AWS8R4U\"},\"serviceCode\":\"AmazonEC2\",\"terms\":{\"OnDemand\":{\"WQGC34PB2AWS8R4U.JRTCKXETXF\":{\"priceDimensions\":{\"WQGC34PB2AWS8R4U.JRTCKXETXF.6YS6EN2CT7\":{\"unit\":\"GB-Mo\",\"endRange\":\"Inf\",\"description\":\"$0.138 per GB-month of Provisioned IOPS SSD (io1) provisioned storage - Canada (Central)\",\"appliesTo\":[],\"rateCode\":\"WQGC34PB2AWS8R4U.JRTCKXETXF.6YS6EN2CT7\",\"beginRange\":\"0\",\"pricePerUnit\":{\"USD\":\"0.1380000000\"}}},\"sku\":\"WQGC34PB2AWS8R4U\",\"effectiveDate\":\"2017-08-01T00:00:00Z\",\"offerTermCode\":\"JRTCKXETXF\",\"termAttributes\":{}}}},\"version\":\"20170901182201\",\"publicationDate\":\"2017-09-01T18:22:01Z\"}"
]
}
For more info read the following doc:
https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_pricing_GetProducts.html
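To tie this back to the original question about an m4.large: the same GetProducts call can be filtered by instance type. A boto3 sketch under the assumption that the TERM_MATCH attribute names below match the documented ones (the Pricing API is only served from a couple of regions, us-east-1 among them):

```python
def build_product_filters(instance_type, region_description):
    """TERM_MATCH filters for an EC2 instance price lookup."""
    return [
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": instance_type},
        {"Type": "TERM_MATCH", "Field": "location", "Value": region_description},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
    ]

def get_instance_price_records(instance_type, region_description):
    # boto3 is imported lazily so build_product_filters stays testable offline.
    import boto3

    pricing = boto3.client("pricing", region_name="us-east-1")
    return pricing.get_products(
        ServiceCode="AmazonEC2",
        Filters=build_product_filters(instance_type, region_description),
        FormatVersion="aws_v1",
        MaxResults=1,
    )

if __name__ == "__main__":
    print(get_instance_price_records("m4.large", "US East (N. Virginia)"))
```

The EBS portion of the estimate comes from the Storage product family shown in the sample response above (GB-month rates), multiplied by your 500 GB.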
Finally, you can get the idea from the AWS Cost Calculator:
https://calculator.s3.amazonaws.com/index.html
I have an AWS API Gateway acting as a proxy to a backend service:
{
"apiKeySource": "HEADER",
"name": "-",
"createdDate": 1513820260,
"binaryMediaTypes": [
"application/zip",
"application/octet-stream"
],
"endpointConfiguration": {
"types": [
"EDGE"
]
},
"id": "-"
}
The integration definition is here:
{
"integrationResponses": {
"200": {
"responseTemplates": {
"application/json": null
},
"statusCode": "200"
}
},
"passthroughBehavior": "WHEN_NO_MATCH",
"timeoutInMillis": 29000,
"uri": "http://${stageVariables.backend}:7000/{proxy}",
"connectionType": "INTERNET",
"httpMethod": "ANY",
"cacheNamespace": "iv06s3",
"type": "HTTP_PROXY",
"requestParameters": {
"integration.request.path.proxy": "method.request.path.proxy",
"integration.request.header.X-Source-IP": "context.identity.sourceIp"
},
"cacheKeyParameters": [
"method.request.path.proxy"
]
}
I have an endpoint that generates a Zip file on the fly and returns it to the requester.
When I access the endpoint directly, the file is fine. When I access it via the API Gateway, it gets corrupted.
The corruption takes the form of bytes in the original file being converted to 0xEFBFBD. This is the UTF-8 'replacement character'.
My request has Accept set to application/zip and the response has Content-Type: application/zip.
My expectation is that the API Gateway should recognize this as a binary media type and leave the file alone, but it seems pretty clear that it's processing it as text content.
What am I doing wrong?
Setting the "Binary Media Type" to "multipart/form-data" resolved a similar issue for me.
See here: AWS Api Gateway as a HTTP Proxy is currupting binary uploaded image files
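For the record, a binary media type can be added to an existing REST API with an UpdateRestApi patch operation. A boto3 sketch, where the API ID is a placeholder; note that JSON-Patch paths escape '/' as '~1', so application/zip becomes application~1zip:

```python
def media_type_patch(media_type):
    """Build the patch operation that registers a binary media type.

    JSON-Patch escapes '/' as '~1', so "application/zip" becomes the
    path "/binaryMediaTypes/application~1zip".
    """
    return {
        "op": "add",
        "path": "/binaryMediaTypes/" + media_type.replace("/", "~1"),
    }

def add_binary_media_type(rest_api_id, media_type):
    # boto3 is imported lazily so media_type_patch stays testable offline.
    import boto3

    apigw = boto3.client("apigateway")
    return apigw.update_rest_api(
        restApiId=rest_api_id,
        patchOperations=[media_type_patch(media_type)],
    )

if __name__ == "__main__":
    # "abc123" is a placeholder REST API id; redeploy the stage afterwards.
    add_binary_media_type("abc123", "multipart/form-data")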