I'm new to Google Cloud Platform and looking for a way to route API traffic to different environments.
Currently I have an API domain name (e.g. api.company.com) mapped to a GCP load balancer, which distributes requests to Google Compute Engine instances. This is all set up in one GCP project, which is the prod1 environment.
I want to create another prod environment, prod2, as a separate project. Rather than switching DNS, I am looking for a way to easily reroute api.company.com to prod2 while also keeping the backend APIs on non-public endpoints.
Can I use Google Cloud Endpoints to do this? Ideally I would set it up in a separate project that can reach the prod1 and prod2 load balancers. If this is achievable, can the load balancers be non-public-facing?
Any recommendation or best practice advice would be much appreciated.
Today, I don't think there is a GCP product that lets you do this easily.
Cloud Endpoints has two problems:
You can't define path wildcards. If you define /path -> prod1 and then call /path/customer, the request won't be routed, because /path/customer is not defined. In the end, you have to define every path in Cloud Endpoints.
That leads to the second problem: you can't, for now, aggregate several API spec files. You would therefore have to maintain one global file for both prod and your tests, with the risk that a bad update causes an outage in production.
As a workaround, you could imagine this architecture:
Deploy Compute Engine instances into another project (prod2) to serve your test API
Create a VPC peering between the two projects (a sketch of this step follows below)
Create another route in the load balancer of the prod1 project to reach the peered network of prod2
I have never tried this kind of architecture, but it should work.
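To make the peering step concrete, here is a minimal sketch driving the gcloud CLI from Python; the project IDs, network names and peering names are placeholders, and the peering only becomes active once it has been created from both sides.

```python
import subprocess

def create_peering(project: str, network: str, peer_project: str,
                   peer_network: str, peering_name: str) -> None:
    """Create one side of a VPC peering with the gcloud CLI."""
    subprocess.run(
        [
            "gcloud", "compute", "networks", "peerings", "create", peering_name,
            "--project", project,
            "--network", network,
            "--peer-project", peer_project,
            "--peer-network", peer_network,
        ],
        check=True,
    )

# Placeholder project and network names; peering must exist in both directions.
create_peering("prod1-project", "default", "prod2-project", "default", "prod1-to-prod2")
create_peering("prod2-project", "default", "prod1-project", "default", "prod2-to-prod1")
```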
Is there a way to load test every component of an AWS solution using Distributed Load Testing? https://aws.amazon.com/solutions/implementations/distributed-load-testing-on-aws/
I have an AWS serverless ecommerce solution, which has a Step Function (containing a few Lambda functions), an API Gateway and RDS. I want to load test the solution at different points: load the Step Function, then load the API Gateway, and so on.
So, I've deployed https://aws.amazon.com/solutions/implementations/distributed-load-testing-on-aws/ and I am facing two issues:
To test the entire solution I have the target URL for the S3 bucket that is the entry point to the solution. The problem is that the authentication key and password are cycled every week, so I have to keep updating the script with the latest key ID and password. Is there a way to use some other mechanism, such as a Jenkins-authorised user integrated with the Distributed Load Testing (DLT) solution, or some other way to keep the entire process automated without compromising security?
Secondly, I have to load test endpoints that do not have external URLs, like the Step Function (there is an async Lambda that initiates the Step Function), and in order to send a payload to the Step Function through DLT I need a target URL. Is it even possible to load test in such a scenario? If yes, how? I have tried using serverless-artillery, but it also needs a target URL.
Load Testing
So if I understand your question correctly, you're looking for ways to load-test your AWS setup. You're using serverless technologies, which are scalable by default, so if you load-test the environment you will most probably just hit the service limits, depending on the load you generate. All these limits are already well documented in the AWS documentation.
Load testing only makes sense (to me) when you're using EC2 instances (or Fargate) and want to know how many instances you need for a particular load, or how much time it takes for the system to scale.
Database
To load test your RDS instance you don't have to load test all the components of your setup. You can load test it independently using JMeter or any other tool.
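If you would rather script it than use JMeter, a minimal sketch of driving concurrent queries at the database could look like the following; it assumes a MySQL-compatible RDS instance, the pymysql package, and placeholder endpoint, credentials and query.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import pymysql  # pip install pymysql

# Placeholder endpoint and credentials -- replace with your own.
DB_CONFIG = {
    "host": "my-db.xxxxxxxx.eu-west-1.rds.amazonaws.com",
    "user": "loadtest",
    "password": "change-me",
    "database": "mydb",
}

def run_queries(worker_id: int, iterations: int = 100) -> float:
    """Open one connection and time a batch of simple queries."""
    conn = pymysql.connect(**DB_CONFIG)
    start = time.time()
    try:
        with conn.cursor() as cur:
            for _ in range(iterations):
                cur.execute("SELECT 1")  # swap in a representative query
                cur.fetchall()
    finally:
        conn.close()
    return time.time() - start

if __name__ == "__main__":
    workers = 20  # number of concurrent connections
    with ThreadPoolExecutor(max_workers=workers) as pool:
        durations = list(pool.map(run_queries, range(workers)))
    print(f"average seconds per worker: {sum(durations) / len(durations):.2f}")
```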
Other Considerations
If you're going with distributed load testing then you have to notify AWS beforehand, as your distributed load might trigger DDoS-like alarms in AWS.
I have a question that is confusing me a little. I have a project locked down at the org level through a perimeter fence. This is to whitelist the IP ranges that can access a Cloud Storage bucket, as the user has no ability to authenticate through service accounts or APIs and requires streaming of data.
This is fine and working, however I am confused about how to also open up access to serverless environments inside GCP. The issue in question is Cloud Build: since introducing the perimeter I can no longer run Cloud Build, due to a violation of VPC controls. Can anyone point me in the direction of how to enable this, since whitelisting the entire Cloud Build IP range is obviously not an option?
You want to create a perimeter bridge between the resources that need to be able to access each other. You can do this in the console or using gcloud, as noted in the docs I linked.
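For reference, a rough sketch of the gcloud call, driven from Python; the access policy ID, bridge name and project numbers are placeholders, so check the Access Context Manager docs for the exact flags available in your gcloud version.

```python
import subprocess

# Placeholders: your access policy ID and the project numbers to bridge.
POLICY_ID = "1234567890"
BRIDGE_NAME = "storage_build_bridge"
RESOURCES = "projects/111111111111,projects/222222222222"

subprocess.run(
    [
        "gcloud", "access-context-manager", "perimeters", "create", BRIDGE_NAME,
        "--title", "storage-build-bridge",
        "--perimeter-type", "bridge",
        "--resources", RESOURCES,
        "--policy", POLICY_ID,
    ],
    check=True,
)
```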
The official documentation mentions that if you use VPC Service Controls, some services are not supported, for example Cloud Build; that is why the problem started right after you deployed the perimeter.
Hi all, so the answer is this.
What you want to do is set up one project that is locked down by the VPC perimeter and has no APIs available for ingestion of the IP-whitelisted storage bucket. Then you create a second project that has a VPC but does not disable the Cloud Storage APIs etc. From there you can read directly from the IP-whitelisted Cloud Storage bucket in the other project.
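To illustrate the read path from that second project, here is a minimal sketch using the google-cloud-storage client; the project ID, bucket name and object path are placeholders.

```python
from google.cloud import storage  # pip install google-cloud-storage

# Run from the second project, which still has the Cloud Storage API enabled.
client = storage.Client(project="second-project-id")  # placeholder project ID
bucket = client.bucket("ip-whitelisted-bucket")       # bucket in the locked-down project
blob = bucket.blob("path/to/object.csv")              # placeholder object path

blob.download_to_filename("/tmp/object.csv")
print("downloaded", blob.name)
```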
Hope this makes sense; I wanted to share it back to the awesome guys above who put me on the right track.
Thanks again
Cloud Build is now supported by VPC Service Controls; see VPC Service Controls: Supported products and limitations.
I have created a Spring Cloud microservices-based application with Netflix APIs (Eureka, Config, Zuul, etc.). Can someone explain to me how to deploy it on AWS? I am very new to AWS and have to deploy a development instance of my application.
Do I need to integrate Docker before that, or can I go ahead without Docker as well?
As long as your application is self-contained and you have externalised your configuration, you should not have any issues.
Go through this link, which discusses what it takes to deploy an app to the cloud: Beyond 15 factor
Use AWS Elastic Beanstalk to deploy and manage your application. Dockerizing your app is not a prerequisite for deploying it to AWS.
If you use an EC2 instance, then its configuration is no different from what you would do on your local machine/server. It's just a virtual machine; no need to dockerize or anything like that. If you're new to AWS, I'd suggest doing just that. Once you get your head around it, you can explore other options.
For example, AWS Elastic Beanstalk is a popular option. It provides a very secure and reliable configuration out of the box with no effort on your part. And yes, it does use Docker under the hood, but you won't need to deal with it directly unless you choose to, at least in the most common cases. It supports a few different ways of deployment, which Amazon calls "Application Environments"; see here for details. Just choose the one you like and follow the instructions. I'd like to warn you, though, that while Beanstalk is usually easier than EC2 to set up and use for a typical web application, your mileage may vary depending on your application's actual needs.
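If you prefer to script a Beanstalk deployment instead of clicking through the console, a hedged sketch with boto3 looks like this; the application name, environment name, bucket and key are placeholders, and it assumes the packaged jar/zip has already been uploaded to S3.

```python
import boto3  # pip install boto3

eb = boto3.client("elasticbeanstalk", region_name="eu-west-1")  # placeholder region

APP = "spring-cloud-demo"      # placeholder Beanstalk application name
ENV = "spring-cloud-demo-dev"  # placeholder environment name
VERSION = "v1"

# Register a new application version from a bundle already uploaded to S3.
eb.create_application_version(
    ApplicationName=APP,
    VersionLabel=VERSION,
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "app/spring-cloud-demo.zip"},
)

# Point the running environment at that version to roll out the new code.
eb.update_environment(EnvironmentName=ENV, VersionLabel=VERSION)
```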
Amazon Elastic Container Service / Elastic Kubernetes Service is also a good option to look into.
These services run the Docker images of your application; auto scaling, availability and cross-region replication are taken care of by the cloud provider.
Hope this helps.
I am trying to access Kafka and 3rd-party services (e.g., InfluxDB) running in GKE, from a Dataflow pipeline.
I have a DNS server for service discovery, also running in GKE. I also have a route in my network to access the GKE IP range from Dataflow instances, and this is working fine. I can manually nslookup from the Dataflow instances using my custom server without issues.
However, I cannot find a proper way to set up an additional DNS server when running my Dataflow pipeline. How could I achieve that, so that KafkaIO and similar sources/writers can resolve hostnames against my custom DNS?
sun.net.spi.nameservice.nameservers is tricky to use, because it must be set very early on, before the name service is statically instantiated. I would pass it with java -D, but Dataflow runs the code itself directly.
In addition, I would not want to replace the system resolvers outright but merely append a new one to the GCP project-specific resolvers that the instance comes pre-configured with.
Finally, I have not found any way to use a startup script with the Dataflow instances, as you would for a regular GCE instance.
I can't think of a way today of specifying a custom DNS in a VM other than editing the /etc/resolv.conf file [1] in the box. I don't know if it is possible to share the default network; if it is, machines are available at hostName.c.[PROJECT_ID].internal, which may serve your purpose if hostName is stable [2].
[1] https://cloud.google.com/compute/docs/networking#internal_dns_and_resolvconf [2] https://cloud.google.com/compute/docs/networking
Total noob question. I want to set up a website on the Google Cloud compute platform with:
a static IP / IP range (an external API requirement)
a simple front-end
average to low traffic, with a maximum of a few thousand requests a day
a separate database instance.
I went through the documentation of the services offered by Google and Amazon. I'm not fully sure of the best way to go about it, and I understand that there is no single right answer.
A viable solution is:
Spin up an n1-standard instance on GCP (I prefer to use Debian)
Get a static IP, which is free as long as you don't leave it unattached (see the sketch after this list)
Depending on your DB type, choose Cloud SQL for structured data or Cloud Datastore for unstructured data
Nginx is a viable option for the web server. Get started here
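A rough sketch of the first two steps, again driven through the gcloud CLI; the project ID, resource names, region/zone and image family below are assumptions, not requirements.

```python
import subprocess

PROJECT = "my-website-project"  # placeholder project ID
REGION = "europe-west1"
ZONE = "europe-west1-b"

def gcloud(*args: str) -> None:
    """Run a gcloud command against the placeholder project."""
    subprocess.run(["gcloud", *args, "--project", PROJECT], check=True)

# 1. Reserve a static external IP address.
gcloud("compute", "addresses", "create", "website-ip", "--region", REGION)

# 2. Create a small Debian VM and attach the reserved address to it.
gcloud(
    "compute", "instances", "create", "website-vm",
    "--zone", ZONE,
    "--machine-type", "n1-standard-1",
    "--image-family", "debian-12",
    "--image-project", "debian-cloud",
    "--address", "website-ip",
)
```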
The rest is up to you. What kind of stack are you using to build your app? How are you going to deploy your code to the instance? You might later want to use Docker and k8s to get flexibility between cloud providers and scaling needs.
The easiest way to create the website you want would be Google App Engine with Datastore as the DB. However, it doesn't support static IPs; this is a design choice. Is the static IP absolutely mandatory?
App Engine does not currently provide a way to map static IP addresses to an application. In order to optimize the network path between an end user and an App Engine application, end users on different ISPs or geographic locations might use different IP addresses to access the same App Engine application. DNS might return different IP addresses to access App Engine over time or from different network locations.