VPC Access connector failed to get healthy - google-cloud-platform

I am getting the below error while trying to create a VPC Access connector in GCP region us-central1:
An internal error occurred: VPC Access connector failed to get healthy. Please check GCE quotas, logs and org policies and recreate.
I also tried to create the VPC access connector in region us-east1 but got the same issue.
I tried searching for existing bugs on the GCP issues portal but could not find this issue.
I have tried to follow the image access constraint guidance, but I don't have an organisation, so I am unable to edit the required policy.

I am having the same issue. After reading this thread I checked different regions with exactly the same configuration:
Network: Default
Subnet: Custom IP range
IP range: 10.8.0.0/28
I can confirm that changing the region solves the issue. In my case, I succeeded with australia-southeast2. Basically, when creating a VPC connector in Google Cloud, some regions work and others do not.
It may be a capacity problem in some Google regions.

It can be an internal IP subnet assignment issue. This subnet must be used exclusively by the connector, per the documentation:
Every VPC connector requires its own /28 subnet to place connector instances on; this subnet must not have any other resources on it other than the VPC connector. If you don't use Shared VPC, you can either create a subnet for the connector to use, or specify an unused custom IP range for the connector to create a subnet for its use. If you choose the custom IP range, the subnet that is created is hidden and cannot be used in firewall rules and NAT configurations.
Or it can also be that you are missing the required image access constraint. In this case, you may follow this step-by-step guide to setting image access constraints (a command-line sketch follows the steps):
Go to the Organization policies page.
In the policies list, click Define trusted image projects.
Click Edit to customize your existing trusted image constraints.
On the Edit page, select Customize.
In the Policy values drop-down list, select Custom to set the constraint on specific image projects.
In the Policy type drop-down list, specify a value as follows:
- To restrict the specified image projects, select Deny.
- To remove restrictions for the specified image projects, select Allow.
In the Custom values field, enter the names of image projects using the projects/IMAGE_PROJECT format. Replace IMAGE_PROJECT with the image project you want to set constraints on, in this case "serverless-vpc-access-images". If you are setting project-level constraints, they might conflict with the existing constraints set on your organization or folder.
Click New policy value to add multiple image projects.
Click Save to apply the constraint.
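If you prefer the command line, here is a minimal sketch of the same project-level constraint using the resource-manager org-policies API; the policy file name and MY_PROJECT are placeholders:
# Write a project-level policy that trusts the Serverless VPC Access image project
cat > trusted-images-policy.yaml <<'EOF'
constraint: constraints/compute.trustedImageProjects
listPolicy:
  allowedValues:
  - projects/serverless-vpc-access-images
EOF
# Apply it to the project
gcloud resource-manager org-policies set-policy trusted-images-policy.yaml --project=MY_PROJECT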
Additionally, please make sure that there are no firewall rules on your VPC network with a priority value lower than 1000 that deny ingress from your connector's IP range.
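To check both of those points from the command line, a rough sketch (the connector name is a placeholder; region, network and range match the configuration discussed above):
# Review the rules and look for any deny rule with a priority number below 1000
gcloud compute firewall-rules list
# Recreate the connector on a dedicated, unused /28 range
gcloud compute networks vpc-access connectors create my-connector --region=us-central1 --network=default --range=10.8.0.0/28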

Related

Google Cloud Platform Reserved Address Space in VPC Network Route

I need to add a route in GCP's VPC Network and when I do, I get an error (shown below) that appears to state there is an overlap/conflict with the 10.130.0.0/16 range. I unfortunately do not see this 10.130.0.0/16 in any route, in any region and I have no idea why this error is occurring.
Creating route "test" failed. Error: Invalid value for field 'resource.destRange': '10.130.90.82/32'. 10.130.90.82/32 hides the reserved address space for network (10.130.0.0/16).
I have tried adding this simple route in several GCP projects but they all fail, and they seem to imply there might be some hidden reserved address space. Could this be? What am I missing? This occurs with any route destination value that is in the 10.130.0.0/16 space, e.g. 10.130.90.82/32 or 10.130.90.0/24.
For clarification, here is an example route that fails:
Additional clarification: here is the 'default' VPC network:
Google Cloud does not allow you to create a new subnet or peering subnet route whose destination exactly matches or is broader than (would contain) an existing custom static route. For example, if your VPC network has a custom static route for the 10.70.1.128/25 destination, Google Cloud prohibits the creation of any subnet or peering subnet route with a primary or secondary subnet IP address range of 10.70.1.128/25, 10.70.1.0/24, or any other range that contains all the IP addresses in 10.70.1.128/25.
Kindly check the Configuring private services access docs. Included in the docs are the considerations, creating an IP allocation, deleting an allocated IP address range, etc.
There is in fact a hidden reserved address space in the default VPC network. I hesitate to call it hidden, as JaysonM mentioned it in his answer, but it does not appear anywhere in the GCP console (to my knowledge).
The default VPC network uses 'auto' subnet creation mode. With this setting enabled, the VPC network reserves the range 10.128.0.0/9 (10.128.0.0 - 10.255.255.255), which cannot be overlapped. Simply switching the VPC network's subnet creation mode from 'auto' to 'custom' will resolve this issue. Do also note this is a one-way change for your VPC network.
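If you prefer the command line for that switch, a one-liner sketch (note again that the change cannot be undone):
# Convert the 'default' auto-mode network to custom subnet mode; existing subnets are kept
gcloud compute networks update default --switch-to-custom-subnet-mode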
Cheers!

How can I prompt google to set up VPC peering from servicenetworking.googleapis.com?

I have some Cloud SQL instances that currently have public IP's. It would make certain security-minded people happy if I changed them to have private IP's.
I am following the instructions documented here: https://cloud.google.com/sql/docs/mysql/private-ip
A summary of those instructions:
Ensure your shared VPC host has servicenetworking.googleapis.com enabled
Ensure your project has servicenetworking.googleapis.com enabled
Allocate an IP address range for your new private IP's
Configure VPC network peering (https://cloud.google.com/sql/docs/mysql/configure-private-services-access)
Create cloud sql instance without public IP
Expect new instance's private IP to be in allocated range
I've completed these through step 4, and I'm seeing this:
My interpretation of that page is that I've done my part and now it's Google's turn, but that was several days ago. Do I have to do something to prompt Google to create the connection?
I think I'm focusing on the right place, because if I try to use a private IP, gcloud tells me to go create the network that I'm waiting on:
❯ gcloud --project=my-project-name beta \
sql instances patch foo \
--network=my-network-name --no-assign-ip
The following message will be used for the patch API method.
{"name": "foo", "project": "my-project-name", "settings": {"ipConfiguration": {"ipv4Enabled": false, "privateNetwork": "https://compute.googleapis.com/compute/v1/projects/my-project-name/global/networks/my-network-name"}}}
Patching Cloud SQL instance...failed.
ERROR: (gcloud.beta.sql.instances.patch) [INTERNAL_ERROR] Failed to create subnetwork. Please create Service Networking connection with service 'servicenetworking.googleapis.com' from consumer project '11111111111' network 'my-network-name' again.
In general, private services access is implemented as a VPC peering connection between your VPC network and the Google services VPC network where your Cloud SQL instance resides. As @JohnHanley pointed out, the VPC peering should be created within minutes, so you should not have to wait longer than that.
To check the peering creation on Stackdriver you can use the following Advanced Filter:
jsonPayload.event_subtype="compute.networks.addPeering"
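The same filter also works from the CLI, in case that is more convenient; a small sketch (the --limit value is arbitrary):
# Look for recent addPeering events in the project's logs
gcloud logging read 'jsonPayload.event_subtype="compute.networks.addPeering"' --limit=10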
That said, the error you are observing when trying to patch your SQL instance makes sense, as the peering hasn't been created. Instead of 'Inactive', its status should be 'Peer VPC network is connected'.
To sum up, in your scenario the Cloud SQL instance cannot get an IP on the aforementioned network as it cannot reach it.
At this specific point I would suggest you focus on the peering creation. As you mentioned you tried recreating it and the status remains the same, it's possible that there's something in your project preventing the peering from being established.
I would also suggest you check the peering limits quota in case it has been reached:
gcloud compute networks peerings list --network='your network'
Also it would be good to review the VPC Peering Restrictions.
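If you end up tearing down and recreating the private services access connection, a minimal sketch of the usual two steps (the range name is a placeholder, and my-network-name / my-project-name are the names used in the question):
# Allocate an IP range for Google-managed services, if one does not already exist
gcloud compute addresses create google-managed-services-range --global --purpose=VPC_PEERING --prefix-length=16 --network=my-network-name --project=my-project-name
# Create (or retry) the Service Networking peering against that allocated range
gcloud services vpc-peerings connect --service=servicenetworking.googleapis.com --ranges=google-managed-services-range --network=my-network-name --project=my-project-name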
All that being said, if you still experience the same issue when creating the VPC peering, an internal investigation may be required and I would suggest you report this using this link
I hope this helps.

Amazon AWS elasticsearch Kibana access from browser

I know this issue has already been discussed before, yet I feel my question is a bit different.
I'm trying to figure out how to enable access to Kibana for the self-managed AWS Elasticsearch which I have in my AWS account.
It could be that what I am about to say here is inaccurate or complete nonsense.
I am a pretty novice in the whole AWS VPC area and the ELK stack.
Architecture:
Here is the "Architecture":
I have a VPC.
Within the VPC I have several sub nets.
Each server sends its data to Elasticsearch using Logstash, which runs on the server itself. For simplicity let's assume I have a single server.
The Elasticsearch HTTPS URL, which can be found in the Amazon console, resolves to an internal IP within the subnet that I have defined.
Resources:
I have found the following link, which suggests using one of two options:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
Solutions:
Option 1: resource based policy
One option is to allow access via a resource-based policy for Elasticsearch by introducing a condition which specifies certain IP addresses.
This was discussed in the following thread but unfortunately did not work for me.
Proper access policy for Amazon Elastic Search Cluster
When I try to implement it in the Amazon console, Amazon notifies me that because I'm using a security group, I should resolve this using security group rules.
Security group rules:
I tried to set a rule which allows my personal computer's (router's) public IP to access the Amazon Elasticsearch ports, and even tried opening all ports to my public IP.
But that didn't work out.
I would be happy to get a more detailed explanation of why, but I'm guessing it's because the Elasticsearch domain has only an internal IP and not a public one, and because it is encapsulated within the VPC I am unable to access it from outside, even if I define a rule allowing my public IP to access it.
Option 2: Using proxy
I'm disinclined to use this solution unless I have no other choice.
I'm guessing that if I set up another server with a public and an internal IP within the same subnet and VPC as the Elasticsearch domain, and use it as a proxy, I would then be able to access this server from the outside by defining the same rules in its newly created security group, like the article suggested.
Sources:
I found an out-of-the-box solution that someone already made for this issue, using a proxy server, at the following link:
It can be used either as an executable or as a Docker container.
https://github.com/abutaha/aws-es-proxy
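For a quick proof of concept in that spirit, an SSH tunnel through a small public-facing instance in the same VPC is often enough; a rough sketch, where the key file, bastion IP and VPC endpoint hostname are all placeholders:
# Forward local port 9200 through the bastion to the VPC-only Elasticsearch endpoint (port 443)
ssh -i my-key.pem -N -L 9200:vpc-my-domain-abc123.us-east-1.es.amazonaws.com:443 ec2-user@BASTION_PUBLIC_IP
# Then browse to https://localhost:9200/_plugin/kibana/ (expect a certificate-name warning, since the cert is for the ES endpoint, not localhost)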
Option 3: Other
Can you suggest another solution? Is it possible to use an Amazon load balancer or Amazon API Gateway to accomplish this task?
I just need a proof of concept, not something which goes into a production environment.
Bottom line:
I need to be able to access Kibana from a browser in order to be able to search the Elasticsearch indexes.
Thanks a lot
The best way is with the just released Cognito authentication.
https://aws.amazon.com/about-aws/whats-new/2018/04/amazon-elasticsearch-service-simplifies-user-authentication-and-access-for-kibana-with-amazon-cognito/
This is a great way to authenticate a single user. It is not a good way for the system you're building to access Elasticsearch.
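If Cognito authentication does fit your case (human users signing in to Kibana from a browser), the rough shape of enabling it from the CLI is below; it assumes you have already created the Cognito user pool, identity pool and an IAM role that Amazon ES can assume, and every name and ID here is a placeholder:
# Attach the pre-created Cognito resources to the domain's Kibana authentication
aws es update-elasticsearch-domain-config --domain-name my-domain --cognito-options Enabled=true,UserPoolId=us-east-1_EXAMPLE,IdentityPoolId=us-east-1:11111111-2222-3333-4444-555555555555,RoleArn=arn:aws:iam::123456789012:role/CognitoAccessForAmazonES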

Restrict access to amazon WorkSpace by IP Address?

I have a simple question which I don't think has a simple answer.
I would like to use Amazon WorkSpaces, but a requirement is that I can restrict the IP addresses that can access a given workspace (or any workspace).
I kind of get the impression this should be possible through rules on the security group on the directory, but I'm not really sure, and I don't know where to start.
I've been unable to find any instructions for this or other examples of people having done this. Surely I'm not the first/only person to want to do this?!
Can anyone offer any pointers??
Based on the comments given by @Mayb2Moro: he obtained information from AWS Support that restriction based on the security group or VPC wouldn't be possible, as WorkSpaces connectivity goes via another external endpoint (a management interface in the backend).
Yes, you are right; you need to work on the security group configured when the workspace is set up. The process goes like this:
Pick the security group used while the WorkSpace bundle was created
Go to EC2 -> Security Groups, select that security group, and restrict its rules to your office's exit IP.
PS : Image Source - http://www.itnews.com.au/Lab/381939,itnews-labs-amazon-workspaces.aspx
Now you can assign IP Access Control Groups to a directory that is associated with your WorkSpaces.
In the IP Access Control Group, you can specify the IPs that you wish to allow access to the workspaces.
Refer to the IP Access Control Groups for Your WorkSpaces for the official documentation.
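For reference, a minimal sketch of the same setup with the AWS CLI; the group name, CIDR and directory ID are placeholders, and the group ID comes from the output of the first command:
# Create an IP access control group that only allows the office exit range
aws workspaces create-ip-group --group-name office-only --group-desc "Office exit IP only" --user-rules ipRule=203.0.113.0/24,ruleDesc=office
# Associate the returned group ID with the WorkSpaces directory
aws workspaces associate-ip-groups --directory-id d-1234567890 --group-ids wsipg-abc123def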

How to open a specific port such as 9090 in Google Compute Engine

I have 2 Google Compute Engine instances and I want to open port 9090 on both instances. I think we need to add some firewall rules.
Can you tell me how I can do that?
You need to:
Go to cloud.google.com
Go to my Console
Choose your Project
Choose Networking > VPC network
Choose "Firewall"
Choose "Create Firewall Rule"
To apply the rule to select VM instances, select Targets > "Specified target tags", and enter into "Target tags" the name of the tag. This tag will be used to apply the new firewall rule onto whichever instance you'd like. Then, make sure the instances have the network tag applied.
Set Source IP ranges to allow traffic from all IPs: 0.0.0.0/0
To allow incoming TCP connections to port 9090, in "Protocols and Ports", check “tcp” and enter 9090
Click Create (or click “Equivalent Command Line” to show the gcloud command to create the same rule)
Update: Please refer to the docs to customize your rules.
Here is the command-line approach to answer this question:
gcloud compute firewall-rules create <rule-name> --allow tcp:9090 --target-tags=<tag-applied-to-your-instances> --source-ranges=0.0.0.0/0 --description="<your-description-here>"
This will open port 9090 for the instances that carry the given network tag. Omitting --target-tags will apply the rule to all instances in the network. More details are in the gcloud documentation and the firewall-rules create command manual
The previous answers are great, but Google recommends using the newer gcloud commands instead of the gcutil commands.
PS:
To get an idea of your project's firewall rules, run gcloud compute firewall-rules list to view all your firewall rules.
This question is old and Carlos Rojas's answer is good, but I think I should post a few things which should be kept in mind while trying to open the ports.
The first thing to remember is that the Networking section has been renamed to VPC Networking. So if you're trying to find out where the Firewall Rules option lives, look under VPC Networking.
The second thing is, if you're trying to open ports on a Linux VM, make sure that under no circumstances do you try to open the port using the ufw command. I tried that and lost SSH access to the VM. So don't repeat my mistake.
The third thing is, if you're trying to open ports on a Windows VM, you'll also need to create a firewall rule inside the VM in Windows Firewall, in addition to VPC Networking -> Firewall Rules. The port needs to be opened in both places, unlike on a Linux VM. So if you can't reach the port from outside the VM, check whether you've opened it in both the GCP console and Windows Firewall.
The last (obvious) thing is, do not open ports unnecessarily. Close ports as soon as you no longer need them.
I hope this answer is useful.
Creating firewall rules
Please review the firewall rule components [1] if you are unfamiliar with firewall rules in GCP. Firewall rules are defined at the network level, and only apply to the network where they are created; however, the name you choose for each of them must be unique to the project.
For Cloud Console:
Go to the Firewall rules page in the Google Cloud Platform Console.
Click Create firewall rule.
Enter a Name for the firewall rule.
This name must be unique for the project.
Specify the Network where the firewall rule will be implemented.
Specify the Priority of the rule.
The lower the number, the higher the priority.
For the Direction of traffic, choose ingress or egress.
For the Action on match, choose allow or deny.
Specify the Targets of the rule.
If you want the rule to apply to all instances in the network, choose All instances in the network.
If you want the rule to apply to select instances by network (target) tags, choose Specified target tags, then type the tags to which the rule should apply into the Target tags field.
If you want the rule to apply to select instances by associated service account, choose Specified service account, indicate whether the service account is in the current project or another one under Service account scope, and choose or type the service account name in the Target service account field.
For an ingress rule, specify the Source filter:
Choose IP ranges and type the CIDR blocks into the Source IP ranges field to define the source for incoming traffic by IP address ranges. Use 0.0.0.0/0 for a source from any network.
Choose Subnets then mark the ones you need from the Subnets pop-up button to define the source for incoming traffic by subnet name.
To limit source by network tag, choose Source tags, then type the network tags into the Source tags field. For the limit on the number of source tags, see VPC Quotas and Limits. Filtering by source tag is only available if the target is not specified by service account. For more information, see filtering by service account vs. network tag.
To limit source by service account, choose Service account, indicate whether the service account is in the current project or another one under Service account scope, and choose or type the service account name in the Source service account field. Filtering by source service account is only available if the target is not specified by network tag. For more information, see filtering by service account vs. network tag.
Specify a Second source filter if desired. Secondary source filters cannot use the same filter criteria as the primary one.
For an egress rule, specify the Destination filter:
Choose IP ranges and type the CIDR blocks into the Destination IP ranges field to define the destination for outgoing traffic by IP address ranges. Use 0.0.0.0/0 to mean everywhere.
Choose Subnets then mark the ones you need from the Subnets pop-up button to define the destination for outgoing traffic by subnet name.
Define the Protocols and ports to which the rule will apply:
Select Allow all or Deny all, depending on the action, to have the rule apply to all protocols and ports.
Define specific protocols and ports:
Select tcp to include the TCP protocol and ports. Enter all or a comma delimited list of ports, such as 20-22, 80, 8080.
Select udp to include the UDP protocol and ports. Enter all or a comma delimited list of ports, such as 67-69, 123.
Select Other protocols to include protocols such as icmp or sctp.
(Optional) You can create the firewall rule but not enforce it by setting its enforcement state to disabled. Click Disable rule, then select Disabled.
(Optional) You can enable firewall rules logging:
Click Logs > On.
Click Turn on.
Click Create.
Link:
[1] https://cloud.google.com/vpc/docs/firewalls#firewall_rule_components
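For completeness, here is a rough gcloud equivalent of the console procedure above, applied to the port from the question; the rule name, tag and network are placeholders, and logging is optional:
# Ingress allow rule for TCP 9090, scoped to instances tagged "my-app", with rule logging turned on
gcloud compute firewall-rules create allow-9090 --network=default --direction=INGRESS --action=ALLOW --rules=tcp:9090 --source-ranges=0.0.0.0/0 --target-tags=my-app --priority=1000 --enable-logging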
You'll need to add a firewall rule to open inbound access to tcp:9090 to your instances. If you have more than two instances, and you only want to open 9090 to those two, you'll want to make sure that there is a tag that those two instances share. You can add or update tags via the console or the command line; I'd recommend using the GUI for that if needed because it handles the read-modify-write cycle with setinstancetags.
If you want to open port 9090 to all instances, you can create a firewall rule like:
gcutil addfirewall allow-9090 --allowed=tcp:9090
which will apply to all of your instances.
If you only want to open port 9090 to the two instances that are serving your application, make sure that they have a tag like my-app, and then add a firewall like so:
gcutil addfirewall my-app-9090 --allowed=tcp:9090 --target_tags=my-app
You can read more about creating and managing firewalls in GCE here.
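Since gcutil has since been retired, the gcloud equivalents of the two commands above look roughly like this (the rule names and the my-app tag mirror the examples):
# Open 9090 to all instances in the network
gcloud compute firewall-rules create allow-9090-all --allow=tcp:9090
# Open 9090 only to instances tagged my-app
gcloud compute firewall-rules create my-app-9090 --allow=tcp:9090 --target-tags=my-app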
I had the same problem as you and was able to solve it by following @CarlosRojas' instructions with a little difference. Instead of creating a new firewall rule, I edited the default-allow-internal one to accept traffic from anywhere, since creating new rules didn't make any difference.
console.cloud.google.com >> select project >> Networking > VPC network >> firewalls >> create firewall.
To apply the rule to VM instances, select Targets, "Specified target tags", and enter into "Target tags" the name of the tag. This tag will be used to apply the new firewall rule onto whichever instance you'd like.
in "Protocols and Ports" enter tcp:9090
Click Save.
Run this command to open the port:
gcloud compute --project=<project_name> firewall-rules create <rule-name> --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:<port_number> --source-ranges=0.0.0.0/0
I had to fix this by decreasing the priority number (which makes the rule's priority higher). This caused an immediate response. Not what I was expecting, but it worked.
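For reference, lowering the priority number on an existing rule can be done in place; a quick sketch with a placeholder rule name and value:
# A lower number means higher priority, so the allow rule is evaluated before higher-numbered deny rules
gcloud compute firewall-rules update <rule-name> --priority=900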