How to deploy a Docker container with a volume in Cloud Run - google-cloud-platform

I am trying to publish an application I wrote in .NET Core with Docker and a mounted volume. I can't really figure out or see any clear solution to my issue that will be cheap (it's for a university project).
I tried running docker-compose via a cloudbuild.yml linked in this post with no luck. I also tried to put my DB file in a Firebase project and access it from the program, but it didn't work. I also read in the GCP documentation that I could probably use Filestore, but the pricing is way out of budget for me. I just need to publish an SQLite database so my server can work correctly, that's it.
Any help would be really appreciated!

Basically, you can't mount a volume in Cloud Run. It's a stateless environment and you can't persist data on it. You have to use external storage to persist your data. See the runtime contract.
With the 2nd generation execution environment, you can now mount a Cloud Storage bucket with GCSFuse, and a Filestore path with NFS.
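As a rough sketch only (Python stands in for whatever entrypoint language you actually use, and the bucket name and mount point are hypothetical), the usual pattern in the 2nd generation environment is to install gcsfuse in the image and mount the bucket from a small entrypoint wrapper before starting the real server:

```python
import os
import subprocess
import sys

BUCKET = os.environ.get("DATA_BUCKET", "my-sqlite-bucket")  # hypothetical bucket name
MOUNT_POINT = "/mnt/gcs"                                    # hypothetical mount point

os.makedirs(MOUNT_POINT, exist_ok=True)

# gcsfuse must be installed in the container image; this only works in
# Cloud Run's 2nd generation execution environment.
subprocess.run(["gcsfuse", "--implicit-dirs", BUCKET, MOUNT_POINT], check=True)

# Hand off to the real server process passed as arguments, e.g.
#   python entrypoint.py dotnet MyApp.dll
os.execvp(sys.argv[1], sys.argv[1:])
```

Keep in mind that a bucket mounted this way is still object storage underneath, so SQLite locking and write performance will not behave like a local disk.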

Related

Can we run an application that is configured for a multi-node AWS EC2 K8s cluster (using kops) on a local Kubernetes cluster (using kubeadm)?

Can we run an application that is configured to run on a multi-node AWS EC2 K8s cluster using kops (project link) on a local Kubernetes cluster (set up using kubeadm)?
My thinking is that if the application runs in a k8s cluster based on AWS EC2 instances, it should also run in a local k8s cluster. I am trying it locally for testing purposes.
Here's what I have tried so far, but it is not working:
First, I set up my local 2-node cluster using kubeadm.
Then I modified the project's installation script (link given above) by removing all references to EC2 (as I am using local machines) and to kops state (particularly in their create_cluster.py script).
I have modified their application YAML files (app requirements) to match my local setup (2-node).
Unfortunately, although most of the application pods are created and in a running state, some other pods fail to be created, and therefore I am unable to run the whole application on my local cluster.
I appreciate your help.
That is the beauty of Docker and Kubernetes: they help keep your development environment matching production. For simple applications written without custom resources, you can deploy the same workload to any cluster running on any cloud provider.
However, the ability to deploy the same workload to different clusters depends on several factors, such as:
How do you manage authorization and authentication in your cluster? For example, IAM, IRSA.
Are you using any cloud-native custom resources? For example, AWS ALBs used as LoadBalancer Services.
Are you using any cloud-native storage? For example, do your pods rely on EFS/EBS volumes?
Is your application cloud agnostic? For example, does it use native technologies like Neptune?
Can you mock cloud technologies locally? For example, using LocalStack to mock Kinesis and DynamoDB (see the sketch after this list).
How do you resolve DNS routes? For example, say you are using RDS on AWS: you can access it using a Route 53 entry. Locally you might be running a MySQL instance, and you need a DNS mechanism to discover that instance.
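For the mocking point above, a minimal sketch of what "using LocalStack" looks like in code, assuming LocalStack is running on its default edge port (the table listing is just an example call):

```python
import boto3  # pip install boto3

# LocalStack's default edge endpoint; credentials can be dummy values locally.
dynamodb = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

print(dynamodb.list_tables())
```

In production the same code talks to the real service simply by dropping the endpoint_url override, which is the kind of switch you want your configuration, not your code, to handle.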
I did a Google search and looked at the kOps documentation. I could not find any info about deploying locally; it only supports public cloud providers.
IMO, you need to figure out a way to set up your EKS-like cluster locally, and wherever cloud-native technologies are used, you need to figure out an alternative way of doing the same thing locally.
The true answer, as Rajan Panneer Selvam said in his response, is that it depends, but I'd like to expand somewhat on his answer by saying that your application should run on any K8S cluster given that it provides the services that the application consumes. What you're doing is considered good practice to ensure that your application is portable, which is always a factor in non-trivial applications where simply upgrading a downstream service could be considered a change of environment/platform requiring portability (platform-independence).
To help you achieve this, you should be developing a 12-Factor Application (12-FA) or one of its more up-to-date derivatives (12-FA is getting a little dated now and many variations have been suggested, but mostly they're all good).
For example, if your application uses a database then it should use DB-independent SQL or NoSQL so that you can switch it out. In production you may run on Oracle, but in your local environment you may use MySQL: your application should not care. The credentials and connection string should be passed to the application via the usual K8S techniques of Secrets and ConfigMaps to help you achieve this. And all logging should be sent to stdout (and stderr) so that you can use a log-shipping agent to send the logs somewhere more useful than a local filesystem.
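As a small sketch of that idea (the variable names are hypothetical; in a cluster they would be injected from a Secret or ConfigMap, locally from a shell or .env file):

```python
import logging
import os
import sys

# Connection details come from the environment; Kubernetes injects them
# from a Secret/ConfigMap, so the code is identical on every platform.
DB_HOST = os.environ.get("DB_HOST", "localhost")
DB_USER = os.environ.get("DB_USER", "app")
DB_PASSWORD = os.environ["DB_PASSWORD"]          # fail fast if the secret is missing
DB_NAME = os.environ.get("DB_NAME", "app_db")

# All logs go to stdout so a log-shipping agent can pick them up.
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger(__name__).info("Connecting to %s/%s as %s", DB_HOST, DB_NAME, DB_USER)
```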
If you run your app locally then you have to provide a surrogate for every 'platform' service that is provided in production, and this may mean switching out major components of what you consider to be your application but this is ok, it is meant to happen. You provide a platform that provides services to your application-layer. Switching from EC2 to local may mean reconfiguring the ingress controller to work without the ELB, or it may mean configuring kubernetes secrets to use local-storage for dev creds rather than AWS KMS. It may mean reconfiguring your persistent volume classes to use local storage rather than EBS. All of this is expected and right.
What you should not have to do is start editing microservices to work in the new environment. If you find yourself doing that then the application has made a factoring and layering error. Platform services should be provided to a set of microservices that use them, the microservices should not be aware of the implementation details of these services.
Of course, it is possible that you have some non-portable code in your system; for example, you may be using some Oracle-specific PL/SQL that can't be run elsewhere. This code should be extracted to config files and equivalents provided for each database you wish to run on. This isn't always possible, in which case you should abstract as much as possible into isolated services and reimplement only those services on each new platform, which could still be time-consuming, but ultimately worth the effort for most non-trivial systems.

Where to keep the Dataflow and Cloud Composer Python code?

This is probably a silly question. In my project we'll be using Dataflow and Cloud Composer. For that, I asked permission to create a VM instance in the GCP project to keep both the Dataflow and Cloud Composer Python programs. But the client asked me the reason for creating a VM instance and told me that you can execute Dataflow without a VM instance.
Is that possible? If yes, how can I achieve it? Can anyone please explain? It would be really helpful to me.
You can run Dataflow pipelines or manage Composer environments from your own computer once your credentials are authenticated and you have both the Google Cloud SDK and the Dataflow (Apache Beam) Python library installed. However, this depends on how you want to manage your resources. I prefer to use a VM instance to keep all the resources I use in the cloud, where it is easier to set up VPC networks including different services. Also, saving data from a VM instance into GCS buckets is usually faster than from an on-premise computer/server.
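For example, a minimal Apache Beam pipeline submitted to Dataflow straight from a laptop might look like this (the project, region and bucket names are placeholders):

```python
import apache_beam as beam  # pip install "apache-beam[gcp]"
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                 # placeholder project id
    region="us-central1",
    temp_location="gs://my-bucket/tmp",   # placeholder bucket
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/input.txt")
        | "CountLines" >> beam.combiners.Count.Globally()
        | "Write" >> beam.io.WriteToText("gs://my-bucket/output")
    )
```

The pipeline itself executes on Dataflow workers; the local machine only builds and submits the job graph, which is why a dedicated VM isn't strictly required.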

Google Cloud Run / Mounting Google Storage Bucket

From a container deployed to Google Cloud Run from a Docker registry, when I try to mount a Google Cloud Storage bucket, I receive the output below. Obviously, without privileged Docker execution this is expected, and as far as I have investigated, Google Cloud Run instances are not meant to support privileged container execution the way Google Compute Engine does.
Yet I am still asking in case anyone has any other knowledge about this: is there any other way to mount a bucket from a Cloud Run container?
Opening GCS connection... Opening bucket... Mounting file system...
daemonize.Run: readFromProcess: sub-process: mountWithArgs:
mountWithConn: Mount: mount: running fusermount: exit status 1
stderr:
fusermount: fuse device not found, try 'modprobe fuse' first
Posting this as a Community Wiki as it's based on the comments of @JohnHanley and @SuperEye:
Based on what you mentioned:
My docker images are not web services.
If this is the case, you cannot use Cloud Run for what you are trying to do. Cloud Run is an HTTP Request/Response system. Your container must respond to HTTP requests, otherwise it will be terminated.
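To make that contract concrete, the smallest thing Cloud Run will keep running is an HTTP server listening on the port given in the PORT environment variable; a sketch using Flask (just one arbitrary framework choice) could look like this:

```python
import os
from flask import Flask  # pip install flask; any HTTP framework works

app = Flask(__name__)

@app.route("/")
def index():
    return "OK"

if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via PORT.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```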
Also, for your other comment:
Google Compute Engine cannot run docker images from Container Registry
That is an incorrect assumption. Compute Engine supports Container Registry.
In conclusion, for your final goal of mounting a bucket as a file system, Cloud Run does not support that ability. An alternative is to use App Engine Flex.

Copy files during GCP instance creation from Python

I am using the googleapiclient in Python to launch VM instances. As part of that, I am using the facility to run startup scripts to install Docker and other Python packages.
Now, one thing I would like to do is copy files to this instance, ideally during the instance creation stage, through Python code.
What might be the way to achieve this? Ideally what would work is to be able to detect that the instance has booted and then be able to copy these files.
If I am hearing you correctly, you want files to be present inside the container that is being executed by Docker in your Compute Engine VM, and your startup script for the Compute Engine instance installs Docker.
My recommendation is not to try to copy those files into the container but instead to have them available on the local file system of the Compute Engine instance. Then configure your Docker startup to mount that directory from the Compute Engine host into the Docker container. Inside the container, you now have access to the desired files.
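If the startup script drives Docker from Python, a sketch of that bind mount with the Docker SDK might look like the following (the image name and paths are hypothetical; a plain `docker run -v /opt/app-files:/data:ro ...` command is equivalent):

```python
import docker  # pip install docker

client = docker.from_env()

# Bind-mount a host directory from the Compute Engine VM into the container,
# so files placed in /opt/app-files on the VM appear at /data inside it.
client.containers.run(
    "gcr.io/my-project/my-image:latest",   # hypothetical image
    volumes={"/opt/app-files": {"bind": "/data", "mode": "ro"}},
    detach=True,
)
```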
As for bringing the files into the Compute Engine environment in the first place, we have a number of options; the core question is where the files come from.
One common approach is to keep the files that you want copied into the VM in a Google Cloud Storage (GCS) bucket/folder. From there, your startup script can use the GCS API or the gsutil command to copy the files from the GCS bucket to the local file system.
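For the GCS API route, a sketch of the copy step using the google-cloud-storage client (bucket name, prefix and destination directory are placeholders):

```python
import os
from google.cloud import storage  # pip install google-cloud-storage

BUCKET_NAME = "my-startup-files"   # placeholder bucket
DEST_DIR = "/opt/app-files"        # placeholder destination on the VM

os.makedirs(DEST_DIR, exist_ok=True)
client = storage.Client()          # uses the VM's service account credentials

for blob in client.list_blobs(BUCKET_NAME, prefix="app-config/"):
    if blob.name.endswith("/"):    # skip "directory" placeholder objects
        continue
    destination = os.path.join(DEST_DIR, os.path.basename(blob.name))
    blob.download_to_filename(destination)
    print(f"Downloaded {blob.name} to {destination}")
```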
Another thought, and again this depends on the nature of the files, is that you can create a GCP disk that simply contains the files. When you create a new Compute Engine instance, that instance can be defined to mount the disk, which can be shared read-only across all the VM instances.
First of all, I would suggest using a tool like Terraform or Google Cloud Deployment Manager to create cloud infrastructure instead of writing custom Python code and handling all the edge cases yourself.
If for some reason you can't use the above tools and a Python program is your only option, then you can do the following:
1. Create a GCS bucket using the Python API and apply an appropriate bucket policy to protect the data.
2. Create a service account which has read permission on the above GCS bucket.
3. Launch the VM instance using the Python API and have your startup script install packages and run the Docker container. Attach the above service account, which has permission to read files from the GCS bucket (see the sketch after this list).
4. Have a startup script in your Docker container which runs the `gsutil` command to fetch files from the GCS bucket and put them in the right place.
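A rough sketch of step 3 with google-api-python-client (every name below is a placeholder and error handling is omitted):

```python
from googleapiclient import discovery  # pip install google-api-python-client

# Placeholder project, zone, bucket and service account
PROJECT = "my-project"
ZONE = "us-central1-a"

compute = discovery.build("compute", "v1")

startup_script = """#!/bin/bash
gsutil cp -r gs://my-startup-files/app-config /opt/app-files
"""

config = {
    "name": "worker-instance",
    "machineType": f"zones/{ZONE}/machineTypes/e2-medium",
    "disks": [{
        "boot": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-12",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
    # Service account with read access to the bucket
    "serviceAccounts": [{
        "email": "file-reader@my-project.iam.gserviceaccount.com",
        "scopes": ["https://www.googleapis.com/auth/devstorage.read_only"],
    }],
    # The startup script runs on first boot and pulls the files down
    "metadata": {"items": [{"key": "startup-script", "value": startup_script}]},
}

request = compute.instances().insert(project=PROJECT, zone=ZONE, body=config)
print(request.execute())
```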
Hope this helps.
Again, if you can use tools like Terraform, that will make things easy.

How to interface an AWS-hosted website with a database on EBS?

The startup I'm working for is building a website, and we want to use AWS to run and host the site and the accompanying MySQL database. Apparently, when you terminate an AWS instance, any data stored on it is lost, so we would keep the database on an EBS volume. The thing I can't figure out, though, is how to interface things running on these two different platforms. How do I tell the web server where the database is?
Sorry if this is a really noob question. Still trying to grasp how this whole cloud service works.
If I am reading this correctly, your DB is on an EBS volume mounted on the same machine. If that is the case, you have to make sure you tell MySQL (in my.cnf) to point its datadir to the EBS directory.
The rest is as usual: your host is localhost, plus your user credentials.
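For example, a connection from the web app still just points at localhost, since moving the datadir to EBS only changes where MySQL stores its files, not how you connect (Python and the credentials below are purely illustrative):

```python
import pymysql  # pip install pymysql; any MySQL client library works the same way

# Placeholder credentials; the host stays "localhost" because the database
# server runs on the same instance, with its data directory on the EBS volume.
connection = pymysql.connect(
    host="localhost",
    user="webapp",
    password="change-me",
    database="site_db",
)
with connection.cursor() as cursor:
    cursor.execute("SELECT VERSION()")
    print(cursor.fetchone())
connection.close()
```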
BTW, there is one more option from Amazon for the DB: RDS (http://aws.amazon.com/rds/), which provides a lot of extra functionality and advantages. Take a look at it.