Total NOOB question. I want to set up a website on Google Cloud Platform with:
a static IP/IP range (an external API requirement)
a simple front end
average-to-low traffic, with a maximum of a few thousand requests a day
a separate database instance.
I went through the documentation of the services offered by Google and Amazon. I'm not fully sure of the best way to go about it, and I understand that there is no single right answer.
A viable solution is:
Spin up an n1-standard instance on GCP (I prefer to use Debian).
Get a static IP, which is free as long as you don't leave it dangling, i.e. reserved but not attached to a running instance (a scripted sketch follows below).
Depending upon your DB type, choose Cloud SQL for structured data or Cloud Datastore for unstructured data.
Nginx is a viable option for the web server.
The rest is up to you. What kind of stack are you using to build your app? How are you going to deploy your code to the instance? You might later want to use Docker and Kubernetes (k8s) for flexibility between cloud providers and for scaling needs.
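For the static IP step, the reservation can be scripted rather than clicked through the console. Here is a minimal sketch using the google-cloud-compute Python client; the project, region, and address names are placeholder assumptions, not values from the question.

```python
# Hypothetical sketch: reserve a regional static external IP with the
# google-cloud-compute client (pip install google-cloud-compute).
from google.cloud import compute_v1

def reserve_static_ip(project: str, region: str, name: str) -> str:
    client = compute_v1.AddressesClient()
    address = compute_v1.Address(name=name, address_type="EXTERNAL")
    # insert() returns an operation; wait for the reservation to complete
    client.insert(project=project, region=region, address_resource=address).result()
    # Read back the reserved address to get the actual IP string
    return client.get(project=project, region=region, address=name).address

# Example (placeholder values):
# ip = reserve_static_ip("my-project", "us-central1", "web-ip")
```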
The easiest way of creating the website you want would be Google App Engine with Datastore as the DB. However, it doesn't support static IPs; this is a design choice. Is a static IP absolutely mandatory for you?
App Engine does not currently provide a way to map static IP addresses to an application. In order to optimize the network path between an end user and an App Engine application, end users on different ISPs or geographic locations might use different IP addresses to access the same App Engine application. DNS might return different IP addresses to access App Engine over time or from different network locations.
Related
I have to build a web application that will see a maximum of 10,000 concurrent users for 1 hour. The web server is NGINX.
The application is a simple landing page with an HTML5 player streaming video from the Wowza CDN.
Can you suggest a suitable deployment on AWS?
A load balancer in front of 2 or more EC2 instances?
If so, what EC2 sizing do you recommend? Is it better to use Auto Scaling?
Thanks
Thanks for your answer. The application is two PHP pages, and the PHP impact is minimal: the code contains only two functions, which check user/password and a token.
The video is provided by the Wowza CDN because it is live streaming, not on-demand.
What tool or service would you suggest for stress testing the web server?
I have to build a web application that will see a maximum of 10,000 concurrent users for 1 hour.
On average that is about 3 requests per second (10,000 requests spread over an hour is roughly 2.8/s), which is not so bad. Sizing is a complex topic, and without more details, constraints, testing, etc., you cannot get a reasonable answer. There are many options, and without more information it is not possible to say which one is best. You mentioned NGINX, but not what it's doing (static sites, PHP, CGI, proxying to something else, etc.).
The application is a simple landing page with an HTML5 player streaming video from the Wowza CDN.
I will just lay down a few common options:
Let's assume it is a single static web page (another assumption) referencing an external resource (the video). Then the simplest and most scalable solution would be an S3 bucket with static website hosting behind CloudFront (CDN).
If you need some simple, quick logic, a Lambda behind a load balancer could be good enough (see the sketch after this list).
And you can of course host your solution on full compute (EC2, Beanstalk, ECS, Fargate, etc.) with different scaling options. But you will have to test to find out your feasible scaling parameters and bottlenecks (I/O, network, CPU, etc.). Please note that different instance types may have different network and storage throughput. AWS gives you an opportunity to test and find out what is good enough.
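To make the Lambda option concrete: the comment above mentions only two lightweight checks (user/password and a token), which fits comfortably in a function. Below is a minimal, hypothetical sketch of a Python Lambda handler registered as an Application Load Balancer target; the token store and all names are placeholder assumptions.

```python
import json

# Placeholder stand-in for a real credential/token store
VALID_TOKENS = {"demo-token"}

def handler(event, context):
    # An ALB target receives query string parameters in the event payload
    token = (event.get("queryStringParameters") or {}).get("token", "")
    ok = token in VALID_TOKENS
    # ALB expects statusCode, statusDescription, headers, and body in the response
    return {
        "statusCode": 200 if ok else 403,
        "statusDescription": "200 OK" if ok else "403 Forbidden",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"authorized": ok}),
    }
```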
I want the hostname in my managed Cloud Run service to be MyServiceName.RevisionName.InstanceId, or anything better than the "localhost" I am getting now.
Is this possible?
Cloud Run is a serverless managed compute platform, built precisely to abstract away all the infrastructure management. The container instances on which Cloud Run services run are ephemeral, meaning that your Cloud Run services will not be mapped to a specific static instance ID. Setting the hostname as you describe in your question will not be possible.
Depending on the nature of the application you can follow one of two possible ways:
Follow one of the suggestions already given in the comments (generate and save a UUID as a variable in the running container's scope so it can serve as an identifier during the container's lifespan). I assume this would be the best workaround, given the simplicity of creating UUIDs. The Stack Overflow community has examples of how to generate UUIDs programmatically in Python, JavaScript, and C# (see the Python sketch after this list).
Migrate the container application from Cloud Run services to a Compute Engine VM instance with a custom hostname.
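A minimal sketch of the first option in Python, assuming nothing beyond the standard library: the identifier is created at module import time, so it stays constant for as long as the container instance lives.

```python
import uuid

# Generated once per container start; warm requests see the same value
INSTANCE_ID = str(uuid.uuid4())

def handle_request(path: str) -> str:
    # Every request served by this container reports the same identifier
    return f"instance {INSTANCE_ID} handled {path}"
```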
The metadata server provides some attributes to uniquely identify your service instance and correlate it to logs and other information sources.
See the Cloud Run-specific attributes and the [metadata server docs](https://cloud.google.com/compute/docs/storing-retrieving-metadata).
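For illustration, a short sketch of querying the metadata server from inside a Cloud Run container; the instance/id path and the Metadata-Flavor header are documented, and the rest is a plain standard-library HTTP call.

```python
import urllib.request

def get_instance_id() -> str:
    # The metadata server is only reachable from inside the running container
    req = urllib.request.Request(
        "http://metadata.google.internal/computeMetadata/v1/instance/id",
        headers={"Metadata-Flavor": "Google"},  # required header
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```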
I have a webapp and database that aren't hosted on any cloud service, just on a regular hosting platform.
I need to build an API to read from and write to that database, and I want to use cloud functions to do so. Is it possible to connect to a remote database from cloud functions (such as AWS Lambda or Google Cloud Functions) even when it isn't hosted on that cloud service?
If so, can there be problems with doing so?
Cloud Functions are just Node.js code that runs in a managed environment. This means your code can do almost anything that Node.js scripts can do, as long as you stay within the restrictions of that environment.
I've seen people connect to many other database services, both within Google Cloud Platform and outside of it. The main restriction to be aware of is that you'll need to be on a paid plan in order to call APIs that are not running on Google Cloud Platform.
Yes it's possible.
If so, can there be problems with doing so?
There could be high latency if the database is in a different network. Long-lived database connection pools also don't work well in these environments, because function instances are created and destroyed constantly. And if your function reaches a high level of concurrency, you may exhaust the number of available connections on your database server.
You can use FaaS in the same way as a web service hosted on any web server or cloud server.
You have to be careful with the duration of your calls to the DB, because FaaS functions are time-limited (15 min for AWS Lambda, 9 min on Google), and you must configure the firewall on your DB server properly.
The container of your Lambda function can be reused, and you can use some tricks with that - see Best Practices for AWS Lambda Container Reuse and the sketch below.
But you can't be sure the same container will still be there between invocations of your service.
You can read some good advice about this here - https://stackoverflow.com/a/37524237/182344
PS: Azure Functions has an Always On setting, but I am not sure how pooling works in that case.
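A minimal sketch of the container-reuse trick in Python, assuming a MySQL database reachable over the network; pymysql, the environment variable names, and the handler signature are placeholder assumptions.

```python
import os
import pymysql

# Module-level connection: survives across invocations while the container is warm
_conn = None

def get_connection():
    global _conn
    if _conn is None or not _conn.open:
        # (Re)connect on a cold start, or if the previous connection was dropped
        _conn = pymysql.connect(
            host=os.environ["DB_HOST"],
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASSWORD"],
            database=os.environ["DB_NAME"],
            connect_timeout=5,
        )
    return _conn

def handler(event, context):
    with get_connection().cursor() as cur:
        cur.execute("SELECT 1")
        return {"result": cur.fetchone()[0]}
```

Keeping the connection at module level lets a warm invocation skip the connection handshake, but as noted above you still have to handle the case where the container (and the connection with it) has gone away.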
Yes, you can access on-premises resources from serverless products.
Please check this detailed tutorial, where you can find 3 methods to achieve your goal:
Connecting using a VPN
Connecting using Partner Interconnect
Connecting using Dedicated Interconnect
I'm new to the cloud environment (Google Cloud)...
Currently I have more than 10 different PHP application products.
I have a website where users can register and create their own subdomain name...
Every time a user registers on my website, I create a VM manually and point the subdomain to it manually...
As registrations on my website increase, it is becoming very hard to manually add the VMs and point the DNS one by one.
What I have in mind is: can we automate the process? If so, how?
What is the best method for this? I have heard about containers and Kubernetes...
Any information, help, or suggestions are appreciated... thank you
You can use infrastructure as code; Terraform is one such tool.
You can drive Terraform from PHP.
Read more about it here: https://github.com/aol/terraform-php
There you can define everything, and it will spin up VMs on your behalf - this is what is known as infrastructure as code.
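As an alternative to Terraform, the same automation can be scripted directly against the cloud APIs. Below is a hypothetical sketch of the DNS half of the workflow using the google-cloud-dns Python client; the zone name, domain, and function signature are placeholder assumptions.

```python
from google.cloud import dns  # pip install google-cloud-dns

def point_subdomain(project: str, zone_name: str, fqdn: str, ip: str) -> None:
    """Create an A record pointing a user's subdomain at a freshly created VM."""
    client = dns.Client(project=project)
    zone = client.zone(zone_name)  # an existing Cloud DNS managed zone
    record = zone.resource_record_set(fqdn, "A", 300, [ip])
    change = zone.changes()
    change.add_record_set(record)
    change.create()  # submit the change set to Cloud DNS

# Example (placeholder values):
# point_subdomain("my-project", "my-zone", "alice.example.com.", "203.0.113.7")
```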
I'm a bit confused trying to understand the various offerings that Google Cloud has.
Is it basically like this:
Google App Engine is fully managed servers: you push the code and it runs.
There are servers that you manage yourself: you choose the sizes, spin them up, and push code manually.
Servers that run Docker containers for you.
Is that a high-level summary of Google Cloud's offerings in terms of application servers (excluding their managed services for DB, caching, etc.)?
Have a look at this:
https://cloud.google.com/docs/choosing-a-compute-option
There are also Cloud Functions, which belong to the compute group of GCP:
https://cloud.google.com/functions/docs/