Is it possible to create Service Fabric Container Service based on a saved image (TAR file)? - visual-studio-2017

Our customer runs a Service Fabric cluster on-premises, with servers that are not allowed to access the internet. I'm trying to create a container service for Service Fabric using Visual Studio 2017. In the wizard for creating a new project there is an option to provide an Image Name for the container, and many examples point to a Docker Hub repository.
So I'm looking for a way to develop and deploy a container service without internet access. Is there a way to point to my own container image (local TAR file) during project creation?
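For context, the usual way a saved TAR image is moved into an environment without internet access is roughly the following sketch; the registry address, image name, and tag are made up for illustration and are not something the Visual Studio wizard produces:

```bash
# On a machine that can build or pull the image: save it to a TAR file.
docker save -o myapp.tar myapp:1.0

# On the isolated network: load the TAR, retag it against an on-prem registry,
# and push it so the cluster nodes can reference it by name.
docker load -i myapp.tar
docker tag myapp:1.0 registry.internal.local:5000/myapp:1.0
docker push registry.internal.local:5000/myapp:1.0
```

With an internal registry in place, the Image Name field could then point at registry.internal.local:5000/myapp:1.0 instead of a Docker Hub name.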

How to deploy a .war stored in Google cloud storage to a Compute Engine VM instance using a Cloud Function?

My current approach-
I'm currently using Cloud Build to build a .war artifact and store it in a GCS bucket. To deploy it on my custom VM, I run a Java program on the GCE VM which detects changes to the bucket via Pub/Sub notifications, then downloads and deploys the fresh .war on the VM.
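For reference, the moving parts of that current setup look roughly like this; the bucket name, topic name, and paths are hypothetical:

```bash
# One-time setup: have GCS publish object-change notifications to a Pub/Sub topic.
gsutil notification create -t war-updates -f json gs://my-artifact-bucket

# What the watcher on the VM effectively does once it sees a notification:
gsutil cp gs://my-artifact-bucket/app.war /opt/app/app.war
pkill -f 'app.war' || true
nohup java -jar /opt/app/app.war > /var/log/app.log 2>&1 &
```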
My objective-
Download a ~50 MB Spring Boot 2.x + Java 11 .war from GCS using a Cloud Function written in Java 11
Upload it to the VM (Ubuntu 18.04.x LTS) using the Cloud Function (generation not relevant)
Deploy it on the VM from the Cloud Function (the .war has an embedded Tomcat container, so I only have to java -jar it)
My issue-
Connecting to the VM "externally" is my main issue. The only solution I can think of is running a Spring web service endpoint on the VM which receives a .war via POST; I would use the Cloud Function to POST the downloaded .war to this endpoint, which would then deploy it on the VM.
However, this approach feels like a Rube Goldberg machine to me, so I'm wondering if there is a better idea than what I've come up with.
P.S. We're aware that pulling from the VM is the sounder approach, but this Cloud Function deployment is a client request, so sadly we must abide.
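If that POST-based push were used, the cloud-function side of it would boil down to something like this; the IP, port, and endpoint path are invented for illustration:

```bash
# Push the .war that was just downloaded from GCS to the deploy endpoint on the VM.
curl -X POST "http://VM_EXTERNAL_IP:8080/deploy" \
     -F "war=@/tmp/app.war"
```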

Connect two cloud run services on GCP

I need to deploy two Cloud Run services on GCP, one frontend and one backend, so I want to ask:
Is it possible to connect two services like these?
If it is possible, what is the best way to connect them so they can communicate?
I searched the internet but didn't find much useful information.
Please consider the official documentation:
Securing Cloud Run services tutorial
This tutorial walks through how to create a secure two-service application running on Cloud Run. This application is a Markdown editor which includes a public "frontend" service which anyone can use to compose markdown text, and a private "backend" service which renders Markdown text to HTML.
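At the gcloud level, that public-frontend / private-backend pattern is roughly the following sketch; the project, region, service, and service-account names are placeholders:

```bash
# Backend: private, only callable with a valid identity token.
gcloud run deploy backend \
  --image gcr.io/my-project/backend \
  --region europe-west1 \
  --no-allow-unauthenticated

# Frontend: publicly reachable.
gcloud run deploy frontend \
  --image gcr.io/my-project/frontend \
  --region europe-west1 \
  --allow-unauthenticated

# Allow the frontend's service account to invoke the private backend.
gcloud run services add-iam-policy-binding backend \
  --region europe-west1 \
  --member serviceAccount:frontend-sa@my-project.iam.gserviceaccount.com \
  --role roles/run.invoker
```

The frontend then calls the backend's URL with an identity token for that service account.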
Yes, it's possible. I'm not going to go into details, but I'll give you a quick overview of the workflow (a command-level sketch follows below).
Assuming you don't have the code in source control and the Docker containers are already built:
Load the Docker image from the .tar file with docker load.
Next, tag that image.
Push the image to Container Registry.
Navigate to the Cloud Run web console and click Create Service, or run gcloud run deploy from the CLI.
If you need a database, it's much better to use Cloud SQL, assuming it's PostgreSQL; create the instance beforehand in the same region.
During deployment you can open the Connections tab and attach your database instance, and set the container port to the port your app listens on.
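Here is that workflow as a rough sketch with made-up project, image, and instance names; the Cloud SQL flag assumes the instance already exists in the same region:

```bash
# Load the saved image, retag it for Container Registry, and push it.
docker load -i app.tar
docker tag app:latest gcr.io/my-project/app:latest
docker push gcr.io/my-project/app:latest

# Deploy to Cloud Run, attaching the Cloud SQL instance and setting the listening port.
gcloud run deploy app \
  --image gcr.io/my-project/app:latest \
  --region europe-west1 \
  --port 8080 \
  --add-cloudsql-instances my-project:europe-west1:my-db
```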
Don't forget to like if it helps!

How can we run the Neptune graph database on Docker?

Since Neptune DB has only recently been productized, it is not available on LocalStack. Can someone guide me on how to deploy the AWS Neptune DB service in a Docker container?
You don't deploy Neptune; you deploy a client application which uses an appropriate client library to access Neptune. The Neptune software and hardware are managed by AWS, and you can't access them except via the API.
My guess is that you're attempting to create a local, Neptune-compatible Docker container (i.e. some Docker container with a compatible API). This would be similar to using MinIO when performing local integration testing against S3. If this is indeed what you're in search of, I'd recommend using TinkerPop's gremlin-server image. This should get the job done for you, since Neptune uses Gremlin for its query language.
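As a sketch, spinning up a local Gremlin Server for testing is a one-liner; note this is the Apache TinkerPop image rather than Neptune itself, so feature parity is not exact:

```bash
# Local Gremlin Server for integration tests; it speaks the same Gremlin protocol
# and listens on 8182, the same port Neptune uses.
docker run -d --name gremlin-server -p 8182:8182 tinkerpop/gremlin-server
```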
For now, I have found only one way: the Pro version of LocalStack, which contains Neptune DB. https://localstack.cloud/features/ Unfortunately, the free version of the test container does not support the DB interface. =(
Neptune is a fully managed graph database, not a binary that can be independently deployed in your personal containers or infrastructure. You can run your client application in your own custom Docker containers, and set up your network so that the container makes requests to the managed Neptune cluster that you have created.
Hope this helps.

Spring Boot microservice deployment in Docker

I need to develop a Spring Boot microservice and deploy it in Docker. I have now developed a sample microservice. While learning about Docker and container deployment I found plenty of documentation on installing Docker, building images, and running the application packaged as a container. I still have some doubts about the deployment procedure:
If I need to deploy 4 Spring Boot microservices in Docker, do I need to create a separate image for each? Or can I use the same Dockerfile for all my Spring Boot microservices?
I am using a PostgreSQL database. Can I include that connection in the Docker image, or do I need to manage it separately?
If you have four different Spring Boot applications, I suggest creating four different Dockerfiles, and building four different images from those files. Basically put one Dockerfile in each Spring application folder.
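A minimal sketch of what one of those per-service Dockerfiles could look like, assuming a standard Spring Boot fat jar; the base image and jar name are assumptions, not something prescribed by Spring Boot:

```bash
# One of these per microservice folder; typically only the jar name changes.
cat > Dockerfile <<'EOF'
FROM eclipse-temurin:17-jre
COPY target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
EOF

docker build -t my-service:latest .
```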
You can build the PostgreSQL credentials (hostname, username & password) into the application by writing them in the code. This is easiest.
If you use AWS and ECS (Elastic Container Service) or EC2 to run your Docker containers you could store the credentials in the EC2 Parameter Store, and have your application fetch them at startup, however this takes a bit more AWS knowledge and you have to use the AWS SDK to fetch the credentials from the application. Here is a StackOverflow question about exactly this: Accessing AWS parameter store values with custom KMS key
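For illustration, fetching such a value with the AWS CLI looks something like this (the parameter name is made up); the application would do the programmatic equivalent through the AWS SDK at startup:

```bash
# Read a SecureString parameter and decrypt it with the associated KMS key.
aws ssm get-parameter \
  --name /myapp/db-password \
  --with-decryption \
  --query Parameter.Value \
  --output text
```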
There can be one single image for all your microservices, but that is not a good design and is not recommended. Always try to decouple things from one another. In your case, create separate images (separate Dockerfiles) for each microservice.
The same applies to your second question: create a separate image (one Dockerfile) for your database as well. For the credentials, you can follow Jonatan's suggestion.
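As a sketch of the database side, the stock PostgreSQL image can be run as its own container; the credentials and database name below are placeholders:

```bash
# Separate database container, independent of the microservice images.
docker run -d --name my-postgres \
  -e POSTGRES_PASSWORD=changeme \
  -e POSTGRES_DB=appdb \
  -p 5432:5432 \
  postgres:15
```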

Is it possible to deploy Docker containers using Netflix's Spinnaker?

I wonder if Spinnaker (http://spinnaker.io) can be used for docker container deployment?
What we do is:
Poll the repo
If the code there is new, we build 3 containers (nginx, Django app container, fluentd logger container)
We spin up the fluentd container in order to collect the logs from the other 2 containers and send them to Splunk / AWS CloudWatch Logs
We spin up the Django app container and, on the same host, the nginx container (as a proxy to the Django container) [and forward the logs into fluentd]
We forward (map) a certain JSON file with the app configuration into the Django container (roughly as sketched after this list)
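A rough per-host sketch of that layout; the image names, ports, and config path are hypothetical:

```bash
# Shared network so nginx can reach the Django container by name.
docker network create appnet

# Fluentd log collector, exposed on the host so the Docker log driver can reach it.
docker run -d --name fluentd --network appnet -p 24224:24224 fluent/fluentd

# Django app container: logs go to fluentd, app config is mapped in as a JSON file.
docker run -d --name django --network appnet \
  --log-driver=fluentd --log-opt fluentd-address=localhost:24224 \
  -v /etc/app/config.json:/app/config.json:ro \
  my-registry.example.com/django-app

# nginx container on the same host, proxying to the Django container.
docker run -d --name nginx --network appnet \
  --log-driver=fluentd --log-opt fluentd-address=localhost:24224 \
  -p 80:80 \
  my-registry.example.com/nginx-proxy
```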
Unfortunately Spinnaker has too few examples; the example they have here shows only how to bake an image with a certain DEB package inside.
We do have Jenkins jobs which can poll the repo, test the code, create and upload the Docker container to the private registry, and deploy the containers using Ansible. The question is whether we can use Spinnaker to do that natively.
There is currently no container support in Spinnaker. Google is actively working on adding Kubernetes support, but there are currently no plans to integrate Spinnaker directly with either Docker or ECS.
One thing we tried that worked was to use Jenkins to build and publish a Debian wrapper for the Docker image that was created. All that this Debian package does is pull and start the Docker container for a Spinnaker service. We then created a Spinnaker pipeline that bakes this Debian package and then deploys it.
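In spirit, the wrapper package boils down to a start script like the following; the registry, image name, and tag are invented for illustration:

```bash
#!/bin/sh
# What the .deb wrapper effectively installs and runs: pull the already-published
# image and start the container, so the bake/deploy stages only ever see a Debian package.
docker pull my-registry.example.com/django-app:1.2.3
docker rm -f django-app 2>/dev/null || true
docker run -d --name django-app -p 8000:8000 my-registry.example.com/django-app:1.2.3
```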