All methods enabled. I'm trying to use a CloudFront distribution with the following cache behavior, configured via the AWS CLI:
{
"TargetOriginId": "S3-AAAAA",
"TrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"ViewerProtocolPolicy": "allow-all",
"AllowedMethods": {
"Quantity": 7,
"Items": [
"HEAD",
"DELETE",
"POST",
"GET",
"OPTIONS",
"PUT",
"PATCH"
],
"CachedMethods": {
"Quantity": 2,
"Items": [
"HEAD",
"GET"
]
}
},
"SmoothStreaming": false,
"Compress": false,
"LambdaFunctionAssociations": {
"Quantity": 1,
"Items": [
{
"LambdaFunctionARN": "arn:aws:lambda:us-east-1:AAAA",
"EventType": "origin-request",
"IncludeBody": true
}
]
},
"FieldLevelEncryptionId": "",
"ForwardedValues": {
"QueryString": false,
"Cookies": {
"Forward": "all"
},
"Headers": {
"Quantity": 0
},
"QueryStringCacheKeys": {
"Quantity": 0
}
},
"MinTTL": 0,
"DefaultTTL": 86400,
"MaxTTL": 31536000
}
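One easy mistake with hand-edited configs like the one above is a `Quantity` field that disagrees with its `Items` list, which makes the CLI reject the update. A quick local sanity check (a hypothetical helper of my own, not part of the AWS CLI or SDK) could look like:

```python
def check_quantities(node):
    """Recursively verify every {"Quantity": n, "Items": [...]} pair agrees.

    Returns a list of mismatch descriptions (empty if the config is consistent).
    """
    problems = []
    if isinstance(node, dict):
        if "Quantity" in node and "Items" in node:
            if node["Quantity"] != len(node["Items"]):
                problems.append(
                    f"Quantity={node['Quantity']} but Items has {len(node['Items'])} entries"
                )
        for value in node.values():
            problems.extend(check_quantities(value))
    elif isinstance(node, list):
        for item in node:
            problems.extend(check_quantities(item))
    return problems

# The AllowedMethods block from the behavior above:
behavior = {
    "AllowedMethods": {
        "Quantity": 7,
        "Items": ["HEAD", "DELETE", "POST", "GET", "OPTIONS", "PUT", "PATCH"],
        "CachedMethods": {"Quantity": 2, "Items": ["HEAD", "GET"]},
    }
}
print(check_quantities(behavior))  # [] means the counts line up
```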
GET requests return fine; however, I can't set up POST requests.
Example of response to POST request:
I don't need to upload to S3 based on POST; I need to be able to send POST requests to the static website.
UPD:
it doesn't work with a custom origin either:
UPD:
resolved by destroying the distribution and creating a new CloudFront distribution with the same settings
If you are using OAI to connect CloudFront with S3, then POST is not supported. From the docs:
POST requests are not supported.
I found a way to resolve it by destroying and creating a new CDN.
Resolved! Ended up just needing to contact Amazon Support to push it through.
I'm attempting to renew a certificate created in AWS Certificate Manager (ACM), but I'm stuck in the dreadful PENDING_VALIDATION status; this is a DNS validated certificate where I validated using the CNAME record.
Under Domains, I can see the domain validation has a Status of Success and a Renewal Status of Success.
If I run aws acm describe-certificate --certificate-arn "examplearn", I get a response showing DomainValidationOptions with a ValidationStatus of SUCCESS for the CNAME validation.
Sensitive values have been replaced with "example":
{
"Certificate": {
"CertificateArn": "arn:aws:acm:us-east-1:example:certificate/certid",
"DomainName": "*.example.com",
"SubjectAlternativeNames": [
"*.example.com"
],
"DomainValidationOptions": [
{
"DomainName": "*.example.com",
"ValidationDomain": "*.example.com",
"ValidationStatus": "SUCCESS",
"ResourceRecord": {
"Name": "examplename",
"Type": "CNAME",
"Value": "examplevalue"
},
"ValidationMethod": "DNS"
}
],
"Serial": "",
"Subject": "CN=*.example.com",
"Issuer": "Amazon",
"CreatedAt": "2019-01-17T12:53:01-08:00",
"IssuedAt": "2021-10-22T21:21:50.177000-07:00",
"Status": "ISSUED",
"NotBefore": "2021-10-22T17:00:00-07:00",
"NotAfter": "2022-11-23T15:59:59-08:00",
"KeyAlgorithm": "RSA-2048",
"SignatureAlgorithm": "SHA256WITHRSA",
"InUseBy": [
"example",
"example",
"example",
"example"
],
"Type": "AMAZON_ISSUED",
"RenewalSummary": {
"RenewalStatus": "PENDING_VALIDATION",
"DomainValidationOptions": [
{
"DomainName": "*.example.com",
"ValidationDomain": "*.example.com",
"ValidationStatus": "SUCCESS",
"ResourceRecord": {
"Name": "examplename",
"Type": "CNAME",
"Value": "examplevalue"
},
"ValidationMethod": "DNS"
}
],
"UpdatedAt": "2022-09-21T23:39:15.161000-07:00"
},
"KeyUsages": [
{
"Name": "DIGITAL_SIGNATURE"
},
{
"Name": "KEY_ENCIPHERMENT"
}
],
"ExtendedKeyUsages": [
{
"Name": "TLS_WEB_SERVER_AUTHENTICATION",
"OID": "1.3.6.1.5.5.7.3.1"
},
{
"Name": "TLS_WEB_CLIENT_AUTHENTICATION",
"OID": "1.3.6.1.5.5.7.3.2"
}
],
"RenewalEligibility": "ELIGIBLE",
"Options": {
"CertificateTransparencyLoggingPreference": "ENABLED"
}
}
}
I successfully followed the instructions in https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-pending-validation/ (checking that the CNAME response exactly matches the values shown in ACM when copy-pasting).
The site's domain registration is in Route 53, with NS records pointing to Cloudflare, where DNS is managed.
Is there something obvious that pops out to you? Thank you!
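For what it's worth, during renewal ACM re-checks the records listed under RenewalSummary.DomainValidationOptions, which is worth comparing against what DNS actually serves. A small sketch (plain parsing of the describe-certificate JSON, using the field names from the dump above) to pull out exactly what ACM expects:

```python
import json

def renewal_records(describe_output):
    """Extract the CNAME records ACM expects to find during renewal validation."""
    cert = describe_output["Certificate"]
    options = cert.get("RenewalSummary", {}).get("DomainValidationOptions", [])
    return [
        (o["ResourceRecord"]["Name"], o["ResourceRecord"]["Value"], o["ValidationStatus"])
        for o in options
        if o.get("ValidationMethod") == "DNS"
    ]

# A trimmed-down version of the describe-certificate output above:
sample = json.loads("""{
  "Certificate": {
    "RenewalSummary": {
      "RenewalStatus": "PENDING_VALIDATION",
      "DomainValidationOptions": [
        {
          "DomainName": "*.example.com",
          "ValidationStatus": "SUCCESS",
          "ResourceRecord": {"Name": "examplename", "Type": "CNAME", "Value": "examplevalue"},
          "ValidationMethod": "DNS"
        }
      ]
    }
  }
}""")
print(renewal_records(sample))  # [('examplename', 'examplevalue', 'SUCCESS')]
```

Each Name/Value pair returned here is what the CNAME lookup against Cloudflare should answer with, byte for byte.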
I'm not seeing my posixAccounts information in the response from the following endpoint:
https://developers.google.com/admin-sdk/directory/reference/rest/v1/users/get
{
"kind": "admin#directory#user",
"id": "8675309",
"etag": "\"UUID\"",
"primaryEmail": "email#example.com",
"name": {
"givenName": "Email",
"familyName": "Account",
"fullName": "Email Account"
},
"isAdmin": true,
"isDelegatedAdmin": false,
"lastLoginTime": "2021-08-04T21:11:17.000Z",
"creationTime": "2021-06-16T14:32:35.000Z",
"agreedToTerms": true,
"suspended": false,
"archived": false,
"changePasswordAtNextLogin": false,
"ipWhitelisted": false,
"emails": [
{
"address": "email#example.com",
"primary": true
},
{
"address": "email#example.com.test-google-a.com"
}
],
"phones": [
{
"value": "123-456-7890",
"type": "work"
}
],
"nonEditableAliases": [
"email#example.com.test-google-a.com"
],
"customerId": "id12345",
"orgUnitPath": "/path/to/org",
"isMailboxSetup": true,
"isEnrolledIn2Sv": false,
"isEnforcedIn2Sv": false,
"includeInGlobalAddressList": true
}
As you can see from the above output, there's no posixAccounts information. I can open the LDAP information in Apache Directory Studio, so I know it's there, but I can't see it in the above output. Since I can see it there, I tried to update it using the update function in the API:
https://developers.google.com/admin-sdk/directory/reference/rest/v1/users/update
I used the following payload, as I'm just testing updating the GID information. I used the documentation below to work out the entry details needed, at least as far as I could tell:
{
"posixAccounts": [
{
"gid": "12345"
}
]
}
https://developers.google.com/admin-sdk/directory/reference/rest/v1/users
I'm getting a 200 response, but nothing is actually changing for the user when doing a PUT to update.
I tried a similar update method from another user on here, but to no avail: Google Admin SDK - Create posix attributes on existing user
I was able to get this resolved by supplying additional details in my PUT request:
{
"posixAccounts": [
{
"username": "email(excluding #domain.com)",
"uid": "1234",
"gid": "12345",
"operatingSystemType": "unspecified",
"shell": "/bin/bash",
"gecos": "Firstname Lastname",
"systemId": ""
}
]
}
The change wouldn't reflect in LDAP until I included "systemId", so that part is required.
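Based on the fields that finally worked above (the exact set of truly required keys is this answer's observation, not something Google documents), a small helper that refuses to send a partial posixAccounts entry might look like:

```python
# Keys observed to be necessary for the PUT to actually reflect in LDAP:
REQUIRED = {"username", "uid", "gid", "operatingSystemType", "shell", "gecos", "systemId"}

def posix_payload(entry):
    """Wrap a posixAccounts entry for a users.update PUT, failing fast on missing keys."""
    missing = REQUIRED - entry.keys()
    if missing:
        raise ValueError(f"posixAccounts entry missing: {sorted(missing)}")
    return {"posixAccounts": [entry]}

payload = posix_payload({
    "username": "email",
    "uid": "1234",
    "gid": "12345",
    "operatingSystemType": "unspecified",
    "shell": "/bin/bash",
    "gecos": "Firstname Lastname",
    "systemId": "",  # empty but present; omitting it kept the change out of LDAP
})
print(payload["posixAccounts"][0]["gid"])  # 12345
```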
I'm attempting to integrate an API with my application, which was built using Nuxt.js and is hosted on AWS Amplify. I've added a proxy; it works perfectly locally, but POST requests return 405 MethodNotAllowed on the AWS server.
For the proxy, I've made the following changes to rewrite the path:
axios: {
proxy: true
},
proxy: {
'/lead/': {
target: 'https://api.apidomain.org/v2',
pathRewrite: { '^/lead/': '' },
changeOrigin: true
}
},
I've read in the Amplify documentation that we can update the redirects, so I've tried:
[
{
"source": "/<*>",
"target": "/index.html",
"status": "404-200",
"condition": null
},
{
"source": "</^[^.]+$|\\.(?!(css|gif|ico|jpg|js|png|txt|svg|woff|ttf|map|json)$)([^.]+$)/>",
"target": "/index.html",
"status": "200",
"condition": null
},
{
"source": "/lead/<*>",
"target": "https://api.apidomain.org/v2/<*>",
"status": "200",
"condition": null
}
]
The first two rules are the defaults; I added the third rule, but I'm still getting the 405 MethodNotAllowed error. What am I missing?
Amplify redirects are executed from the top of the list down. This was fixed by reordering the rules:
[
{
"source": "/lead/<*>",
"target": "https://api.apidomain.org/v2/<*>",
"status": "200",
"condition": null
},
{
"source": "</^[^.]+$|\\.(?!(css|gif|ico|jpg|js|png|txt|svg|woff|woff2|ttf|map|json)$)([^.]+$)/>",
"target": "/index.html",
"status": "200",
"condition": null
}
]
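The ordering issue can be seen with a toy first-match-wins resolver (my own simplification of Amplify's matching, using plain prefix checks rather than Amplify's actual pattern syntax):

```python
def resolve(path, rules):
    """Return the target of the first rule whose source prefix matches (top-down)."""
    for rule in rules:
        prefix = rule["source"].split("<")[0]  # crude handling of the <*> wildcard
        if path.startswith(prefix):
            return rule["target"]
    return None

# Original order: the catch-all SPA rewrite comes first and shadows the proxy rule.
spa_first = [
    {"source": "/<*>", "target": "/index.html"},
    {"source": "/lead/<*>", "target": "https://api.apidomain.org/v2/<*>"},
]
# Fixed order: the API proxy rule is evaluated before the catch-all.
api_first = [
    {"source": "/lead/<*>", "target": "https://api.apidomain.org/v2/<*>"},
    {"source": "/<*>", "target": "/index.html"},
]
print(resolve("/lead/new", spa_first))  # /index.html (a POST here gives 405)
print(resolve("/lead/new", api_first))  # https://api.apidomain.org/v2/<*>
```

With the catch-all first, every POST to /lead/... was rewritten to the static index.html, which naturally rejects POST with 405.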
I am trying to save my audio feed to AWS S3.
The acquire and start calls give the proper responses as described in the documentation, but when I try to stop the recording it returns a 404 error code, and the recording is not found in the AWS S3 bucket.
Below are the request and response for each call.
/acquire
#Request body
body = {"cname": cname, "uid": uid, "clientRequest": {"resourceExpiredHour": 24}}
#Response
"Code": 200,
"Body":
{
"resourceId": "IqCWKgW2CD0KqnZm0lcCzQisVFotYiClVu2jIxWs5Rpidc9y5HhK1HEHAd77Fy1-AK9piRDWUYNlU-AC7dnZfo6QVukbSB_eh3WqTv9_ULLK-EXxt93zdO8yAzY-3SGMPVJ5x4Rx3DsHgvBfnzJWhOvjMFEcEU9X4WMmtdXJxqjV3hhpsx74tefhzfPA2A7J2UDlmF4RRuINeP4C9uMRzPmrHlHB3BrQcogcBfdgb9DAx_ySNMUXGMQX3iGFuWBtjNRB4OLA2HS04VkSRulx3IyC5zkambri3ROG6vFV04jsPkeWb3hKAdOaozYyH4Sq42Buu7dM2ndVxCMgoiPDCi-0JCBL77RkuOijiOGQtOU-w9QKoPlTXRNeTur1MSfouE0A-4eDgu79FxK5abX7dckwcv9R3AExvs47U-uhmBh8vE6NXx4dQrXsu9Krx7Ao"
}
/start
#Request body
body = {
"uid": uid,
"cname": cname,
"clientRequest": {
"recordingConfig": {
"maxIdleTime": 30,
"streamTypes": 0,
"channelType": 0,
},
"recordingFileConfig": {"avFileType": ["hls"]},
"storageConfig": {
"accessKey": ACCESS_ID,
"region": 8,
"bucket": BUCKET_NAME,
"secretKey": ACCESS_SECRET,
"vendor": 1,
"fileNamePrefix": [cname, TODAY_DATE.strftime("%d%m%Y")],
},
},
}
#Response
"Code": 200,
"Body":
{
"sid": "fd987833cb49dc9ba98ceb8498ac23c4",
"resourceId": "IqCWKgW2CD0KqnZm0lcCzQisVFotYiClVu2jIxWs5Rpidc9y5HhK1HEHAd77Fy1-AK9piRDWUYNlU-AC7dnZfo6QVukbSB_eh3WqTv9_ULLK-EXxt93zdO8yAzY-3SGMPVJ5x4Rx3DsHgvBfnzJWhOvjMFEcEU9X4WMmtdXJxqjV3hhpsx74tefhzfPA2A7J2UDlmF4RRuINeP4C9uMRzPmrHlHB3BrQcogcBfdgb9DAx_ySNMUXGMQX3iGFuWBtjNRB4OLA2HS04VkSRulx3IyC5zkambri3ROG6vFV04jsPkeWb3hKAdOaozYyH4Sq42Buu7dM2ndVxCMgoiPDCi-0JCBL77RkuOijiOGQtOU-w9QKoPlTXRNeTur1MSfouE0A-4eDgu79FxK5abX7dckwcv9R3AExvs47U-uhmBh8vE6NXx4dQrXsu9Krx7Ao"
}
/stop
#Request body
body = {"cname": cname, "uid": uid, "clientRequest": {}}
#Response
{
"resourceId": "IqCWKgW2CD0KqnZm0lcCzQisVFotYiClVu2jIxWs5Rpidc9y5HhK1HEHAd77Fy1-AK9piRDWUYNlU-AC7dnZfo6QVukbSB_eh3WqTv9_ULLK-EXxt93zdO8yAzY-3SGMPVJ5x4Rx3DsHgvBfnzJWhOvjMFEcEU9X4WMmtdXJxqjV3hhpsx74tefhzfPA2A7J2UDlmF4RRuINeP4C9uMRzPmrHlHB3BrQcogcBfdgb9DAx_ySNMUXGMQX3iGFuWBtjNRB4OLA2HS04VkSRulx3IyC5zkambri3ROG6vFV04jsPkeWb3hKAdOaozYyH4Sq42Buu7dM2ndVxCMgoiPDCi-0JCBL77RkuOijiOGQtOU-w9QKoPlTXRNeTur1MSfouE0A-4eDgu79FxK5abX7dckwcv9R3AExvs47U-uhmBh8vE6NXx4dQrXsu9Krx7Ao",
"sid": "fd987833cb49dc9ba98ceb8498ac23c4",
"code": 404,
"serverResponse": {
"command": "StopCloudRecorder",
"payload": {
"message": "Failed to find worker."
},
"subscribeModeBitmask": 1,
"vid": "431306"
}
}
My AWS bucket CORS policy is as follows:
[
{
"AllowedHeaders": [
"Authorization",
"*"
],
"AllowedMethods": [
"HEAD",
"POST"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": [
"ETag",
"x-amz-meta-custom-header",
"x-amz-storage-class"
],
"MaxAgeSeconds": 5000
}
]
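One thing worth double-checking in the storageConfig above: if I recall the Agora docs correctly, each fileNamePrefix element may contain only letters and digits, so a cname with hyphens or other characters can make the upload fail even when start returns 200. A small pre-flight check (my own helper, not an Agora API):

```python
def valid_prefix(parts):
    """Check each fileNamePrefix element is purely alphanumeric (Agora's stated rule)."""
    return all(p.isalnum() for p in parts)

# The prefix built above was [cname, TODAY_DATE.strftime("%d%m%Y")]:
print(valid_prefix(["mychannel", "01012023"]))   # True
print(valid_prefix(["my-channel", "01012023"]))  # False: '-' is not allowed
```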
I was facing the same issue.
I wanted to record a session with just one user, but it seems that this is not possible; there must be two users with different uids. I'm not sure why, but at least the following worked for me.
Try:
1. Start your stream with one user, e.g. uid: 1.
2. Connect with another device to the same channel but a different uid.
3. Start recording.
4. Before stopping your recording, make sure that the query request is returning data.
If you are getting data from the query request, then your stream is recording.
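The "query before stop" step above can be sketched as a small polling loop; query_fn here stands in for the actual Agora query HTTP call, and the response shape is a simplified assumption based on the responses shown earlier:

```python
import time

def wait_until_recording(query_fn, attempts=5, delay=0.0):
    """Poll the query endpoint until it reports recording data; only then is stop safe."""
    for _ in range(attempts):
        response = query_fn()
        # Assumption: an active recording's query response carries a fileList
        # (or similar payload) under serverResponse.
        if response.get("serverResponse", {}).get("fileList") is not None:
            return True
        time.sleep(delay)
    return False

# A stub standing in for the real HTTP call: empty first, then recording data.
responses = iter([{}, {"serverResponse": {"fileList": ""}}])
print(wait_until_recording(lambda: next(responses)))  # True once query returns data
```

Calling stop before the worker has actually started recording is one way to get the "Failed to find worker" 404 shown above.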
My CloudFront distribution is aggressively caching my index.html file after the initial GET request. While I use versioned file names for all other files, I want the freshest index.html on every request. The origin is an S3 bucket, and I upload my file using the --cache-control flag:
aws s3 cp index.html s3://buck/index.html --cache-control "max-age=0"
When the above was not working, I tried:
aws s3 cp index.html s3://buck/index.html --cache-control "no-store"
I have verified that the headers are set on the S3 object:
> aws s3api head-object --bucket buck --key index.html
{
"AcceptRanges": "bytes",
"LastModified": "Tue, 31 Mar 2020 14:10:40 GMT",
"ContentLength": 3265,
"ETag": "",
"CacheControl": "no-store",
"ContentType": "text/html",
"Metadata": {}
}
What smells is that after I change the cache control to no-store, the response headers in the browser still show cache-control: max-age=0. I also see x-cache: Miss from cloudfront, but I guess that would be expected? Opening the site in a private window shows the correct response header.
My CacheBehaviors for the distribution are:
{
"Quantity": 1,
"Items": [
{
"PathPattern": "index.html",
"TargetOriginId": "hostingS3Bucket",
"ForwardedValues": {
"QueryString": false,
"Cookies": {
"Forward": "none"
},
"Headers": {
"Quantity": 0
},
"QueryStringCacheKeys": {
"Quantity": 0
}
},
"TrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"ViewerProtocolPolicy": "redirect-to-https",
"MinTTL": 0,
"AllowedMethods": {
"Quantity": 2,
"Items": [
"HEAD",
"GET"
],
"CachedMethods": {
"Quantity": 2,
"Items": [
"HEAD",
"GET"
]
}
},
"SmoothStreaming": false,
"DefaultTTL": 0,
"MaxTTL": 0,
"Compress": true,
"LambdaFunctionAssociations": {
"Quantity": 0
},
"FieldLevelEncryptionId": ""
}
]
}
Going by the AWS CloudFront docs, it seems like CloudFront and browsers should respect the headers when the origin adds Cache-Control: no-store. I'm not sure what I'm doing wrong. Do I have to invalidate the cache? I would like to understand this behavior so I can update my site in a predictable way.
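As an aside, the "versioned file names for all other files" approach mentioned above usually means embedding a content hash in each asset name, so every file except index.html is immutable and can be cached indefinitely. A minimal sketch of that pattern (my own illustration, not tied to any particular build tool):

```python
import hashlib
from pathlib import PurePosixPath

def versioned_name(filename, content):
    """Insert a short content hash before the extension, e.g. app.js -> app.3a7bd3.js.

    The same bytes always yield the same name, so a changed file gets a new URL
    and CloudFront never needs to revalidate the old one.
    """
    digest = hashlib.sha256(content).hexdigest()[:6]
    p = PurePosixPath(filename)
    return f"{p.stem}.{digest}{p.suffix}"

print(versioned_name("app.js", b"console.log('hi')"))
```

Only index.html, which references these hashed names, then needs the short/zero TTL behavior configured above.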