Is anyone getting this issue with Google Cloud Run Domain Mapping? When I add a custom domain to my domain mappings, I get this:
Waiting for certificate provisioning. You must configure your DNS records for certificate issuance to begin.
I know it says it was only added 1 day ago and I should give it time, but I actually let it sit for 5 days, deleted it, and this is my second try.
You can see in the below screenshot that the domain is managed via Cloudflare. I even tried toggling the proxy service on and off, with no luck.
Turning proxying off in Cloudflare resolved the issue in my case (keeping the record as DNS only).
Most likely Google's load balancer needs to receive the validation request directly in order to issue the certificate.
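As a quick sanity check, you can verify that the hostname resolves to Google's endpoint rather than Cloudflare's proxy IPs once proxying is off (www.example.com is a placeholder):

    # With "DNS only", the CNAME target should be visible; with proxying on,
    # you would instead see Cloudflare's own A records.
    dig +short CNAME www.example.com
    dig +short A www.example.com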
I faced the same issue with the exact error:
Waiting for certificate provisioning. You must configure your DNS records for certificate issuance to begin.
After digging a bit more, the error actually made sense. Before generating the cert, Google checks whether your DNS records are properly configured and fully propagated across all regions, which was not the case for me due to a glitch at the nameserver level. I raised a ticket with my nameserver vendor, attaching DNS propagation reports from the tools/websites listed below (a command-line alternative follows the list), which clearly showed that the DNS records were not available in some regions. Once they fixed the propagation issue, all my reports came back positive; I then recreated my domain mapping and it worked within a few minutes.
Tools used to check DNS propagation status:
https://dnspropagation.net/
https://www.whatsmydns.net/
https://dnschecker.org/
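If you prefer the command line, a rough equivalent (a sketch; example.com and the resolver IPs are placeholders you can vary) is to ask several public resolvers directly and compare their answers:

    # Query a few well-known public resolvers; the answers should agree
    # once the records have propagated everywhere.
    for ns in 8.8.8.8 1.1.1.1 9.9.9.9 208.67.222.222; do
      echo "== $ns =="
      dig @"$ns" +short A example.com
    done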
At the moment, it seems like Domain Mapping is just a buggy service.
The solution for now seems to be to be patient and to try several times until it works. I'd suggest giving it some time between attempts.
The reasons why I feel it's a buggy service:
gcloud beta run domain-mappings create gets stuck at Creating......⠼.
gcloud beta run domain-mappings describe shows messages such as:
"Domain mapping '[...domain_name...]' already exists for this application.
You can modify this domain mapping with DomainMappings.PATCH".
"Waiting for certificate provisioning. You must configure your DNS records
for certificate issuance to begin." - Even though the DNS records are fine.
The user interface isn't any better. It can also get stuck while creating... And the console itself warns that the operation may fail silently, suggesting the gcloud CLI as a workaround.
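If you do end up retrying, a delete-and-recreate cycle from the CLI looks roughly like this (a sketch; the service name, domain, and region are placeholders):

    # Remove the stuck mapping, recreate it, then poll the status.
    gcloud beta run domain-mappings delete --domain example.com --region us-central1
    gcloud beta run domain-mappings create --service my-service --domain example.com --region us-central1
    gcloud beta run domain-mappings describe --domain example.com --region us-central1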
Update 2022
It's been a while since I last used this feature, but it is still taking ~2 hours for the domain to become available.
I just tried toggling the proxy off again and it seemed to work. They must have fixed something internally.
I had the same issue in the past few days: the loading icon kept spinning for hours, even a day, and my DNS records were correct (checked with Google's toolbox). I "resolved" the issue just by repeatedly adding/removing the domain; after about four attempts it suddenly started working. I always waited an hour or more before each attempt. I used the Cloud Run web interface, not the gcloud CLI. I guess, as was mentioned before, it's because the feature is still in beta, but maybe this comment will help someone until they resolve this issue.
Adding the domain mapping via the console does not show the correct DNS records to be added, as it is missing the name field. If you run gcloud beta run domain-mappings create, it shows the DNS records with a name field set to the value of the Cloud Run service.
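For reference, the create command and the kind of record list it prints look roughly like this (a sketch; the service, domain, and record values are illustrative):

    gcloud beta run domain-mappings create --service my-service --domain www.example.com
    # Illustrative output: note the NAME column that the console omits.
    #   NAME  RECORD TYPE  CONTENTS
    #   www   CNAME        ghs.googlehosted.com.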
I had a similar error on a domain I bought with GoDaddy. The issue was caused by a parked-domain record whose source I couldn't tell, unless it was set by the vendor. It mapped my domain to a parking page, and its IP, 34.102.136.180, was preventing my service from mapping correctly. After chatting with a GAE assistant I was able to resolve the issue by deleting the IP, though of course I also sought clarification from the vendor themselves. It was my first time using GoDaddy, and for the life of me I couldn't figure out the problem.
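A quick way to spot a leftover parking record like this is to query the bare domain directly (example.com is a placeholder):

    # A stray parking A record shows up here even when your other records look correct.
    dig +short A example.com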
I had the same situation. Additionally, I got this error message in Cloud Domains:
Your domain is suspended because the registrant email address has not
yet been verified. Check your email and follow the instructions to
remove the suspension.
Related
This may be a very simple thing, but I am pretty new to GCP and don't really understand how all this stuff works, so please bear with me.
I am trying to host a static site with GCP. My site is built with Jekyll and I am using GCP containers to deploy it. I got that part working.
I then wanted to give it a human-friendly URL. I bought one using the GCP console and then went to create a domain name mapping. So far I have been waiting for a couple of days. I read in some other similar posts that canceling and restarting the mapping process helped, but I've tried 3 times so far, waiting ~24 hours between each, and still no luck.
It tells me that I need to configure the DNS records with my domain host, but if I understand correctly, GCP is my domain host. I have also followed the instructions here, and still no luck.
Am I doing something wrong or perhaps I am missing something here?
Note: I have DNSSEC on, maybe that makes a difference.
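For what it's worth, even when Google is both registrar and DNS host, the records for the mapping still have to be created in your Cloud DNS zone; a rough sketch (the zone name, domain, region, and record values are placeholders):

    # Show the records Cloud Run expects for this mapping.
    gcloud beta run domain-mappings describe --domain www.example.com --region us-central1
    # Add the corresponding CNAME to the Cloud DNS zone serving the domain.
    gcloud dns record-sets create www.example.com. \
      --zone my-zone --type CNAME --ttl 300 --rrdatas ghs.googlehosted.com.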
I have seen this question asked before concerning extracting Snowflake data on Tableau Server (v2020.3, with the 2020.3 desktop version); however, so far none of the solutions have solved the issue.
The error I am seeing is: this job failed on Feb 16, 2021, 3:16 PM after running for 1.9 min because of: com.tableausoftware.nativeapi.exceptions.ConnectivityException: [Snowflake][Snowflake] (4) REST request for URL [my URL] failed: CURLerror (curl_easy_perform() failed) - code=7 msg='Couldn't connect to server' osCode=10060 osMsg='Unknown error'.
I have asked the Tableau and Snowflake admins about network settings etc., and am told everything is set up correctly. However, another group within the company follows the same process, and their extract refreshes work fine. Could it be a setup issue on AWS? A Snowflake issue? A network proxy? The Tableau Server version? I am using Server 2020.3.
Thank you!
In my past experience, this has almost always been a networking-related issue such as proxy configuration. Snowflake provides a function that returns the list of endpoints that need to be whitelisted or bypassed in any firewall, proxy, security policy, etc. One of the endpoints is an internal S3 bucket, so make sure they are all included. The networking team should be able to trace the traffic (i.e., the packet flow).
Additionally, you could look for more clues in the logs (Tableau or the Snowflake ODBC driver), which might show connectivity errors. Further troubleshooting could be done with SnowCD, or by comparing the environmental differences between your team and the group whose extracts succeed.
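If the function in question is SYSTEM$WHITELIST (my assumption; it returns the endpoints your Snowflake account depends on), a rough check from a machine with the SnowSQL client installed might look like this:

    # Dump the required endpoints; you may need to trim the output down to
    # the bare JSON array before feeding it to SnowCD.
    snowsql -q 'SELECT SYSTEM$WHITELIST();' > whitelist.json
    # Test connectivity to each listed endpoint.
    snowcd whitelist.json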
I am trying to follow this page from JetBrains:
https://blog.jetbrains.com/teamcity/2019/03/teamcity-google-cloud-deployment/
I have enabled the APIs, created an external IP address, and set up my A record, and it resolves correctly. So then I follow the next step, and in my Google Cloud console I issue the following:
gcloud deployment-manager deployments create --template https://raw.githubusercontent.com/JetBrains/teamcity-google-template/master/teamcity.jinja --properties zone:,ipAddress:,domainName:,domainOwnerEmail:
I have filled in the fields with the relevant values, and I press return.
It looks like it's chuntering away for a bit, and then I get the error message:
"Creation of legacy mode networks is deprecated. Please create a subnet mode network instead by removing the IPv4Range field and adding the autoCreateSubnetworks field to your network insert request." (reason: badRequest)
I have no clue what this means or what to do to make it work.
I was surprised, as the page on the JetBrains site is only from March 2019, yet the instructions don't seem to work. I am quite familiar with TeamCity, having used it every day for the last 8 years, but I'm not at all familiar with Google Cloud, so I need some pointers or instructions on how to do this...
Regards Julian
I think this error is related to this line in the template, which still uses a legacy-network CIDR (the IPv4Range field). However, it seems that legacy networks are deprecated in favor of subnet mode networks. Also referenced here.
So, you can create an issue on JetBrains' side for them to fix this.
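If you need to unblock yourself before JetBrains fixes the template, one option (a sketch, assuming you fork the template) is to replace the legacy IPv4Range property on the network resource with autoCreateSubnetworks, which is exactly what the error message asks for. The equivalent network created by hand:

    # Subnet-mode network, the replacement for deprecated legacy networks.
    gcloud compute networks create teamcity-network --subnet-mode=auto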
I am working on a serverless setup for a project and ran into a strange error. This was working fine before I had to delete my old certificates and make a new one.
In short, I am following the tutorial series at serverless-stack.com for reference, and when running the apig-test command I get the following error.
{ status: 403,
statusText: 'Forbidden',
data: { message: 'Forbidden' } }
This screams policy error to me. So I went to check my policy to make sure it allows execution for the AuthRole, and indeed it does. I verified this in the IAM section under Roles and looked at my service's Auth_Role that I created when I set up Cognito.
I don't want to cause information overload here, but if anyone has any ideas about where to look next I would be very appreciative, and I'll provide any details you want to see.
One thing I want to note is that if I run the apig-test command with the direct URL to the Lambda function instead of my domain, it works perfectly fine.
This suggests that nothing is wrong with my code; it is more likely a policy setting related to how I set up the domain.
I ran sls create_domain accordingly, and I see the entries in Route 53 & API Gateway; they finished their 40 minutes many hours ago. I made sure it's using the correct certificate, since I wiped out the other one.
My custom domains have worked in the past, thanks to a plugin I found and this tutorial (https://serverless.com/blog/serverless-api-gateway-domain/); it's only recently that they stopped working, when I realized I needed to add some more domains to my SSL cert.
So I assume the policy error is somewhere around this, but I'm not sure where to look?
OK, I found the answer. In API Gateway, under Custom Domains, there is a section called Base Path Mappings. This MUST be set to one of your functions, with a default path of / (or just leave the path empty), and the destination set to your Lambda service. This made it work for me.
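For reference, the same mapping can also be created from the CLI (a sketch; the domain name, API id, and stage are placeholders):

    # Omitting --base-path maps the root path ("/") of the custom domain to the stage.
    aws apigateway create-base-path-mapping \
      --domain-name api.example.com \
      --rest-api-id abc123 \
      --stage prod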
I've deployed a copy of Opserver, and it works perfectly when using alladmin as the security setting. However, once I switch it to ad and configure the groups, the SQL tab goes away, and I get an access denied message if I try browsing directly to it. The dashboard still displays all SolarWinds data as expected.
The build I'm using is actually from November. I tried a more recent build, but I lose the network information from SolarWinds (the CPU and Mem graphs show, but Net is all blank).
Is there a separate place to configure the SQL permissions that I'm missing?
I think perhaps there was some caching going on for the hub that wasn't happening for the provider, because they are both working now. Since it was a new security group, perhaps it hadn't replicated yet (causing the SQL auth to fail) while the dashboard provider was still using the previous authentication?
I also discovered a neat option while researching this, though: the GitHub page mentions that you can also specify security at a provider level in the JSON, using the AdminGroups and ViewGroups properties!
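A minimal sketch of what that could look like in a provider's JSON settings (the property names come from the GitHub page; the group names and the surrounding file layout here are assumptions):

    {
      "AdminGroups": "MYDOMAIN\\SQL-Admins",
      "ViewGroups": "MYDOMAIN\\SQL-Viewers"
    }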