Docker overlay for local development and remote containers

I am looking to improve our development cycle, which uses multiple Docker containers maintained by multiple dev teams.
Currently each dev team is responsible for a few services that depend on other teams' services, which means every dev team needs to run all the containers locally.
What I'm trying to figure out is how a local container can be exposed to a network on a remote cluster, so that each team can join that network without having to run all the services locally.

One possible solution is to forward the remote Docker daemon's socket over an SSH tunnel, so that services registered on the remote daemon become reachable from other machines:

    ssh -nNT -L /tmp/docker.sock:/var/run/docker.sock <USER>@<IP> &

and then point the local Docker CLI at the tunnelled socket:

    export DOCKER_HOST=unix:///tmp/docker.sock
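
To sketch the overlay idea itself (a hedged example, assuming the remote cluster runs in Docker swarm mode; the network and image names are illustrative): create an attachable overlay network on the cluster, then, with DOCKER_HOST pointing at the tunnelled socket, attach a dev container to it:

    # on the remote swarm manager: create an overlay network that
    # standalone containers are allowed to attach to
    docker network create --driver overlay --attachable dev-net

    # with DOCKER_HOST set as above, this runs on the remote daemon,
    # joined to the overlay network alongside the other teams' services
    docker run --rm --network dev-net my-team/user-service:dev

One caveat: with DOCKER_HOST redirected, the container actually runs on the remote host, not locally; a truly local container would need some other bridge (e.g. a VPN into the cluster network) to reach the overlay.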

Related

Deploying Apache CloudStack with vSphere/vCenter

For a group project in one of my university IT classes, each group is given 3 servers, and the professor wants us to get an Apache CloudStack environment running using those three. While he was initially vague about the instructions, he later informed us that we should install the ESXi hypervisor on all 3 of our servers and go from there.
We first installed ESXi on all 3 of our servers. Then we installed vCenter Server on one of them in order to combine all the computing resources, adding each as a host in a cluster before starting to set up CloudStack. What we are about to do next is install the CloudStack Management Server on a VM created in vCenter Server.
I was reading the CloudStack documentation before we started the installation, which is where my question stems from. The documentation mentions that a host should not have any running VMs on it before getting added to CloudStack. Here is the exact text:
Ideally clusters that will be managed by CloudStack should not contain any other VMs. Do not run the management server or vCenter on the cluster that is designated for CloudStack use. Create a separate cluster for use of CloudStack and make sure that there are no VMs in this cluster.
So my question is, does that include the management server VM? If it does, would that mean we have to make a separate cluster just for the host server that contains the management server? Because if that's the case, we can't use any of the other resources on the server that is running the management server. Or does it mean that you can, it's just not recommended?
On top of that, the documentation also mentions the following:
Put all target ESXi hypervisors in dedicated clusters in a separate Datacenter in vCenter.
So would I have to put the ESXi host containing vCenter Server and CloudStack Management Server in both a separate datacenter and cluster?

How do I pass the Eureka service IP to my Java application running in a Docker container?

I have a number of Java and Python services running in Docker containers in a clustered environment. I'm using Eureka for service discovery, and it works fine locally with the Eureka IP address hardcoded in the application configuration files. My problem is the flexible configuration of the Eureka service for the Java services: the containers will be deployed in three environments where Eureka will have different IP addresses.
Is there a way to pass Eureka URI using e.g. JVM environment variable?
Or if I pass the URI as an application argument, how can I get it propagated to the Eureka client configuration?
PS: I use AWS ECS, and due to the number of services and existing AWS constraints I cannot put all the Docker containers in a single task definition, so I cannot rely on Docker name resolution and simply hardcode a Eureka hostname. On the other hand, I might have multiple Eureka instances and would like to specify which one a particular container should use.
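For the environment-variable route asked about above, a minimal sketch (the IP, port, and image name are illustrative; assuming a Spring Cloud Netflix Eureka client, Spring Boot's relaxed binding maps the variable to eureka.client.serviceUrl.defaultZone):

    # option 1: pass the URI as a JVM system property
    java -Deureka.client.serviceUrl.defaultZone=http://10.0.1.5:8761/eureka/ -jar service.jar

    # option 2: pass it as an environment variable when starting the container
    docker run -d \
      -e EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://10.0.1.5:8761/eureka/ \
      my-java-service:latest

In ECS, the same variable can be set per environment in the task definition's environment section, so each deployment gets its own Eureka address.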
The answer to my question turned out to be a configuration server; a description of this beast can be found here: https://dzone.com/articles/using-spring-config-server

When using Hyperledger with VirtualBox, can I have several instances on one network?

I am a beginner in blockchain, so I have many questions.
When configuring the Hyperledger network, I create multiple Ubuntu instances using VirtualBox on one PC. Can the peers within each instance be connected in a single blockchain network?
Thanks in advance for your help.
You are correct in identifying that the peers need to communicate with each other. In the default development Fabric, all the Docker containers run on a single machine and the network addressing/routing is managed by Docker Compose. If you split your Fabric across separate VirtualBox Ubuntu instances, you will have to understand and manage the network addressing/routing yourself. This is a Docker and networking issue, not really a Fabric or Composer issue. You may find that Kubernetes or Docker Swarm is the most helpful way forward for you.
For that, you have to create your Fabric network and then add every node to it. You need to configure each and every peer in the channel config using the configtxgen tool.
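As a hedged sketch of the configtxgen step (assuming the sample TwoOrgs profiles from a configtx.yaml, as in the Fabric samples; profile and channel names are illustrative):

    # generate the genesis block for the ordering service
    configtxgen -profile TwoOrgsOrdererGenesis -outputBlock genesis.block

    # generate the channel creation transaction listing the peer organizations
    configtxgen -profile TwoOrgsChannel -outputCreateChannelTx channel.tx -channelID mychannel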

Pre-deploy development communication with an Internal Kubernetes service

I'm investigating a move to Kubernetes (coming from AWS ECS), but I haven't solved the problem of local development when the service being developed depends on internal services.
Let me elaborate:
When developing and testing microservices, before they are deployed as a Kubernetes Service, I want to be able to talk to other, internal Kubernetes Services. As there are more than 20 microservices, I have a Kubernetes cluster running the latest development versions; running everything in Minikube is not an option.
Example:
I'm developing a user-service which needs access to the email-service. The email-service is already on Kubernetes and is an internal service.
So before the user-service is deployed, I want to be able to talk to the internal email-service for dev/testing. I can't make use of Kubernetes' nice service-discovery environment variables.
As we already have a VPN up to restrict the DEV environment to testers/developers only, could I use this VPN to provide access to the Kubernetes Service IP addresses? My Kubernetes DEV environment is in the same VPC as the VPN.
If you deploy your internal services as type NodePort, you can access them over your VPN via that nodePort. NodePorts can be dynamically allocated, or you can make them 'static' so they are known to you up front.
When developing an app on your local machine, you can then reach the dependent service through that NodePort.
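A minimal sketch of that (service name and port are illustrative, assuming an existing email-service deployment):

    # expose the deployment as a NodePort service
    kubectl expose deployment email-service --type=NodePort --port=8080 --name=email-service-np

    # look up which nodePort was allocated
    kubectl get svc email-service-np -o jsonpath='{.spec.ports[0].nodePort}'

The service is then reachable over the VPN at <any-node-ip>:<nodePort>.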
As an alternative, you can use port-forwarding from kubectl (https://kubernetes.io/docs/user-guide/connecting-to-applications-port-forward/) to forward a pod to your local machine. (Note: this only handles traffic to a pod, not a service.)
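For example (the pod name and ports are illustrative):

    # forward local port 8080 to port 8080 of the email-service pod
    kubectl port-forward email-service-7d4b9c-x2x1z 8080:8080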
Telepresence (http://telepresence.io) is designed for this scenario, though it presumes developers have kubectl access to the staging/dev cluster.
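A hedged example of the Telepresence workflow (assuming the v1-era CLI documented at that site):

    # open a local shell whose DNS and routing behave as if it were a pod
    # in the cluster, so email-service resolves like any internal service
    telepresence --run-shell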

Can any Bluemix application run on AWS or a local server?

I know this kind of question touches on basic Bluemix concepts, but I just wonder: if I develop an application on public Bluemix using a certain runtime, such as Node.js or Liberty, can this application run on my own local server or on AWS?
Does it depend on the Bluemix-provided services that I bind to the application?
Or, if I install Cloud Foundry on my local server or an AWS cloud host, can the application run without any problems or issues?
Thank you.
You can try out Lattice. It will allow you to run a minimal Cloud Foundry runtime locally or hosted on AWS, which will let you run your applications. If the services you are talking to are publicly accessible, i.e. have a publicly routable host and port, then you can expose them as environment variables in your CF app manifest and reach out to them from your own CF, or you could look at user-provided services. You will need to upload any buildpacks to your Lattice/CF installation that are not part of the standard installation.
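As a hedged sketch of the user-provided-service route, using the standard cf CLI against a full Cloud Foundry (Lattice itself shipped its own ltc tool; the service name, app name, and URI here are illustrative):

    # wrap an external service's credentials as a user-provided service
    cf create-user-provided-service my-db -p '{"uri":"postgres://user:pass@dbhost:5432/mydb"}'

    # bind it to the app and restage so the credentials appear in VCAP_SERVICES
    cf bind-service my-app my-db
    cf restage my-app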
DISCLAIMER: Lattice is useful during development and NOT recommended for production use. You should set up a full Cloud Foundry deployment for that.