My SSL certificate is stuck at Pending Validation. It eventually fails and times out after a few days.
I own my domain on Route 53 and added the CNAME record in the newly created public hosted zone. While requesting the certificate, I entered the fully qualified domain name as watsky1337.link. I also tried *.watsky1337.link, but that didn't change the outcome.
My CloudFormation template for the certificate request:
{
  "Resources": {
    "MyCertificate": {
      "Type": "AWS::CertificateManager::Certificate",
      "Properties": {
        "DomainName": "*.watsky1337.link",
        "ValidationMethod": "DNS"
      }
    }
  }
}
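As an aside, AWS::CertificateManager::Certificate also supports a DomainValidationOptions property: if you pass the hosted zone ID, CloudFormation creates the validation CNAME for you and waits for the certificate to be issued. A minimal sketch, assuming the hosted zone is actually authoritative for the domain (which turned out to be the problem below):

{
  "Resources": {
    "MyCertificate": {
      "Type": "AWS::CertificateManager::Certificate",
      "Properties": {
        "DomainName": "*.watsky1337.link",
        "ValidationMethod": "DNS",
        "DomainValidationOptions": [{
          "DomainName": "*.watsky1337.link",
          "HostedZoneId": "Z01832163FDLRM2C7PVYW"
        }]
      }
    }
  }
}

This removes the need to copy the validation token into a separate RecordSet resource by hand.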
And the CloudFormation template for adding the CNAME record to the public hosted zone:
{
  "Resources": {
    "myDNSRecord": {
      "Type": "AWS::Route53::RecordSet",
      "Properties": {
        "HostedZoneId": "Z01832163FDLRM2C7PVYW",
        "Name": "_42fb819b92f98e5ef699548b8d5a52df.watsky1337.link",
        "ResourceRecords": [
          "_13b8185f6fa218a71d9fbb82bfbe705c.ndlxkpgcgs.acm-validations.aws."
        ],
        "Type": "CNAME",
        "TTL": "900"
      }
    }
  }
}
(Screenshots: a detailed view of the certificate, and the record sets in this hosted zone.)
I have the required permissions to do all this because this is my own personal AWS account.
I tried to troubleshoot by checking whether my nameservers are visible from the CLI (e.g. with dig NS watsky1337.link), but they aren't. What am I doing wrong?
Based on Mark B's comment, I changed the nameservers at the registrar to match the ones in the public hosted zone. My certificate's status is now Issued, and the nameservers are visible from the CLI as well.
I have an application deployed to Google Cloud (Java 11 / Spring, classic stack) that has been running since February 2020, over a year and a half, and it worked flawlessly until now.
I have CORS configured like this:
@Bean
public FilterRegistrationBean<CorsFilter> simpleCorsFilter() {
    UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    CorsConfiguration config = new CorsConfiguration();
    config.setAllowCredentials(true);
    config.setAllowedOrigins(Collections.singletonList("*")); // bad practice, I know
    config.setAllowedMethods(Collections.singletonList("*"));
    config.setAllowedHeaders(Collections.singletonList("*"));
    source.registerCorsConfiguration("/**", config);
    FilterRegistrationBean<CorsFilter> bean = new FilterRegistrationBean<>(new CorsFilter(source));
    bean.setOrder(Integer.MIN_VALUE); // i.e. Ordered.HIGHEST_PRECEDENCE, so this filter runs first
    return bean;
}
The frontend is hosted on Firebase.
Nobody has redeployed anything in that year and a half, but it just started throwing CORS errors.
Is there some Google Cloud update or policy I missed that would make it stop working all of a sudden? I tried Edge, Chrome, and Safari, so it shouldn't be browser-specific...
Okay, never mind: I found out that the billing account had somehow expired. The server was then shut down, and when nothing answers the request, the browser surfaces it as this weird CORS error.
We have a Google Group set up to handle inbound support emails, and Mailgun set up to forward from various addr@ourdomain.com inboxes to that group. This worked fine up until recently. Now Mailgun claims the delivery worked, but nothing shows up in the Google Group. Here is Mailgun's detailed status for the message:
"event": "delivered",
"delivery-status": {
"tls": true,
"mx-host": "aspmx.l.google.com",
"code": 250,
"description": "",
"session-seconds": 0.647407054901123,
"utf8": true,
"attempt-no": 1,
"message": "OK",
"certificate-verified": true
}
I don't know what else to do to debug this and determine where the email is going on Google's side.
It turns out this was tied to a new (or newly changed) setting in the Google Group whereby all new messages were going into a Pending folder to be reviewed by a group moderator (this had not previously been the case). Changing that setting to bypass the Pending folder and deliver straight to the group resolved the problem.
I would like to set up my WorkMail to receive incoming email, but I cannot see how, and I get this message:
Your domain's incoming mail is not enabled
I think the original asker has abandoned this thread, but it is an issue that many people are struggling with, and nobody has a really good answer. So I am going to show the screenshot Flux was asking for in the comments.
I am known to answer my own questions here, so by getting on the case I will hopefully be able to catalyze a definitive answer.
I am managing my domain with AWS Route 53 too, and I am surprised that there is nothing in WorkMail to set this up automatically. You can see that there are 3 DKIM-related CNAME records that are shown as "Pending". This is odd, because I had many other records that needed setup; I set them up manually, hit the Retry button, and every time the DNS change was discovered immediately. Only these 3 records keep this weird "Pending" status.
In my case the 72-hour propagation window should definitely not be the issue; it's not about DNS propagation. There is something else about DKIM causing the "Pending" status.
UPDATE: The "Pending" status has now (~4 hours later) changed to "Verified", so there is nothing wrong with the DNS settings any more. Still, the message remains "Your domain's incoming mail is not enabled."
Another suggestion made elsewhere was to check the SES rules. I had a little clash trying to set up SES rules before I discovered WorkMail; when I went to set up WorkMail, I deleted my bad SES rules. So now I have added one simple rule back:
Under SES "Configure Email Receiving" there is one rule set, called INBOUND_MAIL. Under "View Active Rule Set" I created one rule:
Recipients: c...t.org
Actions:
1. WorkMail action: deliver mail to WorkMail organization m-26...71
which is the ARN of my WorkMail organization. So that's there.
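For reference, the same rule expressed as a CloudFormation template would look roughly like this. This is just a sketch: the rule name is made up, and the recipient domain and organization ARN are placeholders for the redacted values above:

{
  "Resources": {
    "WorkMailInboundRule": {
      "Type": "AWS::SES::ReceiptRule",
      "Properties": {
        "RuleSetName": "INBOUND_MAIL",
        "Rule": {
          "Name": "deliver-to-workmail",
          "Enabled": true,
          "Recipients": ["example.org"],
          "Actions": [{
            "WorkmailAction": {
              "OrganizationArn": "arn:aws:workmail:us-east-1:111122223333:organization/m-EXAMPLE"
            }
          }]
        }
      }
    }
  }
}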
Still, with all that, it keeps saying inbound email is not configured. In Thunderbird I have set up the account with IMAP and SMTP just fine, and outbound email works, but replying to the emails sent from the new organization always bounces with:
550 5.1.1 Requested action not taken: mailbox unavailable
UPDATE: I haven't forgotten about this issue; I still haven't figured it out, but I will, and I will report back here.
What I have is one platform stack and possibly multiple web application stacks (each representing one web application). The platform stack deploys an ECS platform that can host multiple web applications but doesn't actually contain any; it's just a platform. Each web application stack then represents one web application.
One of the HTTPS listeners in my platform stack template is shown below. Basically it is an HTTPS listener on port 443 carrying one default certificate (you need at least one certificate to create an HTTPS listener):
"BsAlbListenerHttps": {
"Type": "AWS::ElasticLoadBalancingV2::Listener",
"Properties": {
"Certificates": [{
"CertificateArn": {
"Ref": "BsCertificate1"
}
}],
...
"Port": "443",
"Protocol": "HTTPS"
}
},
...
Now, let's say I want to create a new web application (e.g. www.example.com). I deploy the web application stack, specify some parameters, and obviously create a bunch of new resources. But at the same time, I have to modify the existing "BsAlbListenerHttps".
I'm able to import the current listener into my web application stack (using exports and imports). But what I also want to do is add a new certificate for www.example.com to that listener.
I've tried looking around but failed to find any answer.
Does anyone know how to do this? Your help is appreciated. Thank you!
What I do in similar cases is use only one certificate for the entire region, and add domains to it as I add apps/listeners on different domains. I also do this per environment, so I have a staging cert and a production cert in two different templates. For each one you define a standalone cert stack, called, for example, certificate-production.json, but use 'certificate' as the stack name so that, regardless of the environment, the stack reference stays consistent:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "SSL certificates for production V2",
  "Resources": {
    "Certificate": {
      "Type": "AWS::CertificateManager::Certificate",
      "Properties": {
        "DomainName": "*.example.com",
        "SubjectAlternativeNames": [ "*.example2.com", "*.someotherdomain.com" ]
      }
    }
  },
  "Outputs": {
    "CertificateId": {
      "Value": { "Ref": "Certificate" },
      "Description": "Certificate ID",
      "Export": { "Name": { "Fn::Sub": "${AWS::StackName}-CertificateId" } }
    }
  }
}
As you can see, by using the SubjectAlternativeNames property this certificate serves 3 wildcard domains. This way I can update the domains as I add services and rerun the stack. The dependent listeners are not changed in any way; they always refer to the single app certificate in the region.
One caveat: when you update a cert in CloudFormation, it will email all host administrators on the given domain (hostmaster@example.com etc.). Each domain gets a confirmation email, and each email has to be confirmed again. If all the domains are not confirmed this way, the stack will fail to create or update.
Using this technique, I can manage SSL for all my apps without any trouble, while making it easy to add new SSL endpoints for new domains.
I create the certificate stack right after the main VPC stack, so all later stacks can refer to the certificate ID defined here via an export.
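For illustration, a listener in a later web application stack could then consume that export with Fn::ImportValue. A sketch, assuming the cert stack is named 'certificate' as above; the other resource names here are hypothetical:

"AppAlbListenerHttps": {
  "Type": "AWS::ElasticLoadBalancingV2::Listener",
  "Properties": {
    "Certificates": [{
      "CertificateArn": { "Fn::ImportValue": "certificate-CertificateId" }
    }],
    "LoadBalancerArn": { "Ref": "AppLoadBalancer" },
    "DefaultActions": [{
      "Type": "forward",
      "TargetGroupArn": { "Ref": "AppTargetGroup" }
    }],
    "Port": 443,
    "Protocol": "HTTPS"
  }
}

(Ref on an AWS::CertificateManager::Certificate resource returns the certificate ARN, which is why the export can be used directly as CertificateArn.)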
I have a Node.js app on Elastic Beanstalk running on multiple EC2 instances behind a load balancer (ELB).
Because of my app's requirements, I had to activate session stickiness.
I activated the "AppCookieStickinessPolicy" using my custom cookie "sails.sid" as the reference.
The problem is that my app needs this cookie to work properly, but the moment I activate session stickiness (via duration-based or, in my case, application-controlled session stickiness), the headers going to my server are modified and I lose my custom cookie, which is replaced by the AWSELB (Amazon ELB) cookie.
How can I configure the load balancer not to replace my cookie?
If I understood correctly, the AppCookieStickinessPolicy should keep my custom cookie, but that's not the case.
Am I doing something wrong somewhere?
Thanks in advance.
Description of my load balancer:
{
  "LoadBalancerDescriptions": [
    {
      "AvailabilityZones": [
        "us-east-1b"
      ],
      ....
      "Policies": {
        "AppCookieStickinessPolicies": [
          {
            "PolicyName": "AWSConsole-AppCookieStickinessPolicy-awseb-e-y-AWSEBLoa-175QRBIZFH0I8-1452531192664",
            "CookieName": "sails.sid"
          }
        ],
        "LBCookieStickinessPolicies": [
          {
            "PolicyName": "awseb-elb-stickinesspolicy",
            "CookieExpirationPeriod": 0
          }
        ],
        "OtherPolicies": []
      },
      "ListenerDescriptions": [
        {
          "Listener": {
            "InstancePort": 80,
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "Protocol": "HTTP"
          },
          "PolicyNames": [
            "AWSConsole-AppCookieStickinessPolicy-awseb-e-y-AWSEBLoa-175QRBIZFH0I8-1452531192664"
          ]
        }
      ]
      ....
    }
  ]
}
The sticky-session cookie set by the ELB is used to identify which node in the cluster to route the request to.
If you are setting a cookie in your application that you need to rely on, and then expecting the ELB to use that same cookie for stickiness, it is going to overwrite the value you're setting.
Try simply allowing the ELB to manage the session cookie instead.
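In CloudFormation terms, letting the ELB manage the cookie means using a duration-based policy (LBCookieStickinessPolicy) instead of the app-cookie one. A minimal sketch for a classic ELB; the resource and policy names are illustrative:

"StickyElb": {
  "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
  "Properties": {
    "AvailabilityZones": ["us-east-1b"],
    "LBCookieStickinessPolicy": [{
      "PolicyName": "DurationStickiness",
      "CookieExpirationPeriod": "3600"
    }],
    "Listeners": [{
      "LoadBalancerPort": "80",
      "InstancePort": "80",
      "Protocol": "HTTP",
      "PolicyNames": ["DurationStickiness"]
    }]
  }
}

With this, the ELB issues and manages its own AWSELB cookie and leaves the application's sails.sid cookie alone.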
I spent a lot of time trying out the ELB stickiness features to route requests from the same client to the same machine in a back-end cluster.
The problem is, it didn't always work 100%, so I had to write a backup procedure using sessions stored in MySQL. But then I realised I didn't need the ELB stickiness functionality at all; I could just use the MySQL session system.
It is more complex to write a database session system, and there is an overhead, of course, in that every HTTP call inevitably involves a database query. However, if this query uses a primary index, it's not so bad.
The big advantage is that any request can go to any server. Or, if one of your servers dies, the next one can handle the job just as well. For truly resilient applications, a database session system is inevitable.