I would like to deploy the time series database QuestDB on GCP, but I do not see any instructions in the documentation. Could I get some steps?
This can be done in a few short steps on Compute Engine. When creating a new instance, choose the region and instance type, then:
In the "Container" section, enable "Deploy a container image to this VM instance"
type questdb/questdb:latest for the "Container image"
This will pull the latest QuestDB Docker image and run it when the instance launches. The remaining setup is adding firewall rules to allow networking on the ports you require:
port 9000 - web console & REST API
port 8812 - PostgreSQL wire protocol
The source of this info is an ETL tutorial by Gabor Boros, which deploys QuestDB to GCP and uses Cloud Functions to load and process data from a storage bucket.
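If you prefer the CLI, a rough equivalent of the console steps above might look like the following (the instance name, zone, machine type, and firewall rule name are placeholders I chose for illustration):

    # create a container-optimized VM that runs the QuestDB image on boot
    gcloud compute instances create-with-container questdb-vm \
        --zone=us-central1-a \
        --machine-type=e2-medium \
        --container-image=questdb/questdb:latest \
        --tags=questdb

    # open the web console / REST API and PostgreSQL wire protocol ports
    gcloud compute firewall-rules create allow-questdb \
        --allow=tcp:9000,tcp:8812 \
        --target-tags=questdb

You may also want to restrict the firewall rule's source ranges to your own IP rather than leaving those ports open to the internet.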
I'd like to host an app that uses a database connection in an AWS Nitro enclave.
I understand that the Nitro enclave doesn't have access to a network or persistent storage, and the only way that it can communicate with its parent instance is through the vsock.
There are some examples showing how to configure a connection from the enclave to an external url through a secure channel using the vsock and vsock proxy, but the examples focus on AWS KMS operations.
I'd like to know if it's possible to configure the secure channel through the vsock and vsock proxy to connect to a database like Postgres/MySQL, etc.
If this is indeed possible, are there perhaps some example configurations somewhere?
Nitrogen is an easy solution for this, and it's completely open source (disclosure: I'm one of the contributors to Nitrogen).
You can see an example configuration for deploying Redis to a Nitro Enclave here.
And a more detailed blog post walkthrough of deploying any Docker container to a Nitro Enclave here.
Nitrogen is a command line tool with three main commands:
Setup - Spawn an EC2 instance, configure SSH, and establish a VSOCK proxy for interacting with the Nitro Enclave.
Build - Create a Docker image from an arbitrary Dockerfile, and convert it to the Enclave Image File (EIF) format expected by Nitro.
Deploy - Upload your EIF and launch it as a Nitro Enclave. You receive a hostname and port which are ready to proxy enclave requests to your service.
You can set up, build, and deploy any Dockerfile to your own AWS account in a few minutes.
I would recommend looking into Anjuna Security's offering: https://www.anjuna.io/amazon-nitro-enclaves
Outside of using Anjuna, you could look into the AWS Nitro SDK and use it to build a networking stack that utilizes the vsock, or modify an existing sample.
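As a rough illustration of the vsock plumbing itself (independent of Anjuna or the SDK), socat built with vsock support can forward a local TCP port inside the enclave over the vsock to a forwarder on the parent instance. The CID, ports, and database hostname below are placeholders, assuming the parent is reachable at CID 3 as in the standard Nitro setup:

    # inside the enclave: expose a local TCP port and tunnel it over the vsock
    socat TCP4-LISTEN:5432,reuseaddr,fork VSOCK-CONNECT:3:8001 &

    # on the parent instance: accept vsock connections and forward them to the database
    socat VSOCK-LISTEN:8001,fork TCP4:mydb.example.com:5432 &

    # the application inside the enclave then connects to localhost:5432 as usual

For it to be a genuinely secure channel, keep TLS between the in-enclave client and the database, so the parent instance only ever relays ciphertext.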
I need to deploy two Cloud Run services on GCP, one frontend and the other backend, so I want to ask:
Is it possible to connect two services like these?
If it is possible, what is the best way to connect the two services so that they can communicate?
I searched the internet but didn't find much useful info.
Please consider the official documentation:
Securing Cloud Run services tutorial
This tutorial walks through how to create a secure two-service application running on Cloud Run. This application is a Markdown editor which includes a public "frontend" service which anyone can use to compose Markdown text, and a private "backend" service which renders Markdown text to HTML.
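The core of that tutorial is service-to-service authentication: the backend is deployed without unauthenticated access, the frontend's service account is granted the run.invoker role on it, and the frontend sends an ID token with each request. A minimal sketch, where the service names, project ID, service account, and backend URL are placeholders:

    # deploy the backend privately (no unauthenticated access)
    gcloud run deploy backend \
        --image gcr.io/PROJECT_ID/backend \
        --region us-central1 \
        --no-allow-unauthenticated

    # allow the frontend's service account to invoke the backend
    gcloud run services add-iam-policy-binding backend \
        --region us-central1 \
        --member serviceAccount:frontend-sa@PROJECT_ID.iam.gserviceaccount.com \
        --role roles/run.invoker

    # inside the frontend, fetch an identity token from the metadata server
    # and send it as a Bearer token when calling the backend
    curl -s -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://backend-xyz-uc.a.run.app"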
Yes, you can. I'm not going to go into details, but I will give you a quick overview of the workflow.
Assuming you don't have the code in source control and these are already-built Docker containers:
Load the Docker image from its .tar archive with docker load.
Next, tag that image.
Push the image to Container Registry.
Navigate to the Cloud Console and click Create Service, or deploy from the CLI with gcloud run deploy.
If you need a database, it's much better to use Cloud SQL (assuming it's PostgreSQL); create one beforehand in the same region.
During deployment you can open the Connections tab and attach your database instance, and set the container port to the port your app listens on.
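Roughly, the CLI side of that workflow could look like this (the image name, project ID, region, port, and Cloud SQL instance are placeholders):

    # load the pre-built image from its tar archive, then tag and push it to Container Registry
    docker load -i backend.tar
    docker tag backend gcr.io/PROJECT_ID/backend
    docker push gcr.io/PROJECT_ID/backend

    # deploy to Cloud Run, setting the container port and attaching the Cloud SQL instance
    gcloud run deploy backend \
        --image gcr.io/PROJECT_ID/backend \
        --region us-central1 \
        --port 8080 \
        --add-cloudsql-instances PROJECT_ID:us-central1:my-postgres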
Don't forget to like if it helps!
I have created Docker images for Druid and Superset, and now I want to push these images to ECR and start an ECS cluster to run the containers. What I have done is create the images by running docker-compose up on my YML file. Now when I type docker image ls, I can see multiple images listed.
I have created an AWS account and created a repository. They provided the push commands, and I pushed the Superset image into ECR to start with. (I didn't push any dependencies.)
I created a cluster in AWS; in one configuration step I provided the custom port 8088. I don't know what this port is for or why they ask for it.
Then I created a load balancer with the default configuration.
After some time I could see the container status turn to running.
I navigated to the public IP I mentioned, on port 8088, and could see Superset running.
Now I have two problems:
Superset always shows a login error.
It stops automatically after some time, then restarts, and this cycle continues.
Should I create different ECR repos and push all the dependencies to ECR before creating a cluster in ECS?
For the service going up and down: since you mentioned you have an LB associated with the service, you may have an issue with the health check configuration.
If the health check fails a number of consecutive times, ECS will kill the task and restart it.
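As a hedged example, if the service sits behind an ALB you could point the target group's health check at a lightweight endpoint (recent Superset versions expose /health, but check yours) and loosen the thresholds; the ARN, names, and numbers below are placeholders:

    # relax the target group health check
    aws elbv2 modify-target-group \
        --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/superset/abc123 \
        --health-check-path /health \
        --health-check-interval-seconds 30 \
        --health-check-timeout-seconds 10 \
        --healthy-threshold-count 2 \
        --unhealthy-threshold-count 5

    # also give the task time to boot before failed checks count against it
    aws ecs update-service \
        --cluster my-cluster \
        --service superset \
        --health-check-grace-period-seconds 120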
How can we run the Neptune graph database on Docker?
Since Neptune has only recently been productized, it is not available on LocalStack. Can someone guide me on how to deploy the AWS Neptune DB service in a Docker container?
You don't deploy Neptune; you deploy a client application which uses an appropriate client library to access Neptune. The Neptune software/hardware is managed by AWS, and you can't access it except via its API.
My guess is that you're attempting to create a local Neptune-compatible Docker container (i.e. some Docker container with a compatible API). This would be similar to using MinIO when performing local integration testing with S3. If this is indeed what you're in search of, I'd recommend using TinkerPop's gremlin-server image. This should get the job done for you, since Neptune uses Gremlin as its query language.
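For example, something along these lines should give you a local Gremlin endpoint to develop against (the container name is arbitrary; pin whichever image tag you need):

    # run TinkerPop's Gremlin Server locally on its default port
    docker run -d --name gremlin-server -p 8182:8182 tinkerpop/gremlin-server

    # point your application's Gremlin client at ws://localhost:8182/gremlin
    # instead of the Neptune cluster endpoint

Bear in mind that Neptune's behavior (IAM auth, transactions, supported steps) won't be identical, so treat this as an approximation for local integration tests.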
For now, I have found only one way: the Pro version of LocalStack, which contains Neptune. https://localstack.cloud/features/ Unfortunately, the free version of the test container does not support the DB interface. =(
Neptune is a fully managed graph database, not a binary that can be independently deployed in your personal containers or infrastructure. You can run your client application in your own custom Docker containers, and set up your network such that the container makes requests to the managed Neptune cluster that you have created.
Hope this helps.
I am trying to set up a new Spring Boot + Docker (microservices) based project. The deployment is targeted at AWS. Every service has a Dockerfile associated with it. I am thinking of using the Amazon container service for deployment, but as far as I can see it only pulls images from Docker Hub. I don't want ECS to pull from Docker Hub; rather, I want it to build the images from the Dockerfiles and then take over deploying those containers. Is it possible to do this? If yes, how?
This is not possible yet with the Amazon EC2 Container Service (ECS) alone - while ECS meanwhile supports private registries (see also the introductory blog post), it doesn't yet offer an image build service (as usual, AWS is expected to add such notable additional features over time, see e.g. the Feature Request: ECS container dream service for more on this).
However, it can already be achieved with AWS Elastic Beanstalk's built-in initial support for Single Container Docker Configurations:
Docker uses a Dockerfile to create a Docker image that contains your source bundle. [...] Dockerfile is a plain text file that contains instructions that Elastic Beanstalk uses to build a customized Docker image on each Amazon EC2 instance in your Elastic Beanstalk environment. Create a Dockerfile when you do not already have an existing image hosted in a repository. [emphasis mine]
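In practice that means placing the Dockerfile at the root of your source bundle and letting Elastic Beanstalk build the image on the instance. With the EB CLI this might look like the following (the application, environment, and region names are placeholders):

    # run from the directory that contains your Dockerfile
    eb init my-app --platform docker --region us-east-1
    eb create my-docker-env

    # later code changes are rebuilt on the instances with
    eb deploy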
In an ironic twist, Elastic Beanstalk has now added Multicontainer Docker Environments based on ECS, but this highly desired, more versatile Docker deployment option doesn't offer the ability to build images in turn:
Building custom images during deployment with a Dockerfile is not supported by the multicontainer Docker platform on Elastic Beanstalk. Build your images and deploy them to an online repository before creating an Elastic Beanstalk environment. [emphasis mine]
As mentioned above, I would expect this to be added to ECS in a not too distant future due to AWS' well known agility (see e.g. the most recent ECS updates), but they usually don't commit to roadmap details, so it is hard to estimate how long we need to wait on this one.
Meanwhile, Amazon has introduced the EC2 Container Registry: https://aws.amazon.com/ecr/
It is a private Docker repository, if you do not like Docker Hub, and it is nicely integrated with the ECS service.
However, it does not build your Docker images, so it does not solve the entire problem.
I use a Bamboo server for building images (the source is in Git repositories in Bitbucket). Bamboo pushes the images to Amazon's container registry.
I am hoping that Bitbucket Pipelines will make the process smoother, with less configuration of build servers. From the videos I have seen, all your build configuration sits right in your repository. It is still in a closed beta, so I guess we will have to wait a bit more to see what it ends up being.
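For reference, the manual build-and-push step that a CI server like Bamboo or Bitbucket Pipelines automates looks roughly like this with a current AWS CLI (the account ID, region, and repository name are placeholders):

    # authenticate Docker to your ECR registry
    aws ecr get-login-password --region us-east-1 \
        | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

    # build from your Dockerfile, tag for ECR, and push
    docker build -t my-service .
    docker tag my-service 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service:latest
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service:latest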