I am very confused about the relationship between Maistra and Istio.
I have looked at their GitHub repositories, and I am still confused about which one is the parent project and which one is derived from it.
Maistra is an opinionated distribution of Istio designed to work with OpenShift.
Source: https://maistra.io/
In other words: Maistra is a clone of Istio, configured in a way that fits best in an OpenShift environment. There is a dedicated page where Maistra is compared with Istio, which states:
The modifications to Maistra are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift or OKD.
Source: https://maistra.io/docs/comparison-with-istio/
So you can think of Istio (https://istio.io) as the main parent project and Maistra as an Istio distribution dedicated to OpenShift deployments.
And thus https://github.com/istio/istio is the parent project and https://github.com/maistra/istio is derived from it in some way.
As we move our workloads to AWS I am looking for an ETL tool which is widely used and has the appropriate connectors - Apache Camel appears to fit the bill. However, I am struggling to find information on how Camel can be deployed in AWS - the obvious one is on an EC2 instance, but we would like to avoid the setup and maintenance required by Virtual Machines. I don't see anyone offering it as a managed service, so the option I'd like to explore is running it as a container in ECS, as we will have numerous other containers running.
Containers don't seem to be an installation option on the Apache Camel website - perhaps it is just too limiting for a tool whose purpose is to connect to everything else?
Is it acceptable and practical to run Camel in a container, and where could I find more information about it?
Apache Camel appears to fit the bill.
Indeed, Apache Camel is a great integration framework. And that's the point: it is a framework, not a product. So there are multiple ways to run Camel flows: as a web app, as a standalone app, or as part of your own code. Camel itself is pretty agnostic about the way you run the flows, and that's why no single way is enforced on the website.
If you want an out-of-the-box product that can generate containerized deployments with Apache Camel, you could have a look at Apache ServiceMix, Apache Karaf, or its commercially supported counterpart, Red Hat Fuse.
Is it acceptable and practical to run Camel in a container, and where could I find more information about it?
It is perfectly fine.
Question: are you able to create a Docker container with your (or any other) application? Based on the question, this skill seems to be lacking, and I really suggest learning it.
You may check the following post: https://medium.com/@wkrzywiec/how-to-put-your-java-application-into-docker-container-5e0a02acdd6b
# Small JDK 8 base image (java:8-jdk-alpine is dated; a maintained image can be swapped in)
FROM java:8-jdk-alpine
# Copy the fat jar produced by the build into the image
COPY ./target/myapp.jar /usr/app/myapp.jar
WORKDIR /usr/app
# Document the port the application listens on
EXPOSE 8080
# Run the application
ENTRYPOINT ["java", "-jar", "myapp.jar"]
Let's assume you can run your ETL tasks as a standalone application; then just run it in a container like any other standalone application.
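For illustration, a rough sketch of the steps, assuming a Maven build that produces target/myapp.jar as in the Dockerfile above (the names are just placeholders):

# Build the standalone application (fat jar) and bake it into an image
mvn clean package
docker build -t myapp:latest .
# Run the ETL task in a container, the same way it would run on ECS/Fargate
docker run --rm myapp:latest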
we would like to avoid the setup and maintenance required by Virtual Machines
Question: how do you distribute your Camel tasks? I mean, what is the result of your build? A WAR file? A standalone app?
To build a web app you could see https://www.baeldung.com/spring-apache-camel-tutorial
The most convenient way to deploy a WAR file on AWS is the AWS Elastic Beanstalk service.
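For example, a minimal sketch with the EB CLI, assuming the CLI is installed and your build produces a WAR (application and environment names are placeholders):

# Create the Elastic Beanstalk application (pick a Tomcat platform when prompted)
eb init my-camel-app
# Create an environment and deploy the current build
eb create my-camel-env
# Redeploy after later builds; you may need to point the CLI at your WAR,
# e.g. via the deploy/artifact setting in .elasticbeanstalk/config.yml
eb deploy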
If you build a standalone application (or use ServiceMix) and you can build a container, then indeed ECS or Fargate seem like natural options.
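A very rough sketch of the Fargate route, assuming the image from the Dockerfile above has been pushed to a registry and a task definition JSON has been written for it (all names and IDs here are placeholders):

# Create a cluster and register the task definition describing the container
aws ecs create-cluster --cluster-name etl-cluster
aws ecs register-task-definition --cli-input-json file://myapp-task-def.json
# Run the ETL task on Fargate
aws ecs run-task \
  --cluster etl-cluster \
  --launch-type FARGATE \
  --task-definition myapp-etl \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-012345],securityGroups=[sg-012345],assignPublicIp=ENABLED}'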
I am looking into Cloud Run to host my new app, and I am wondering if it is possible to generate a separate URL for each git branch.
I use Netlify to host my other app. When it is connected to GitHub (or other VCS services), it builds the source code in a branch and deploys it to a URL that is specific to the branch.
Can it be done easily or do I have to write some logic?
Or do you think AWS amplify or some other services are of better fit?
The concept of Cloud Run and URLs is quite simple:
https://<service-name>-<project hash>.<region>.run.app
If your project and region are the same for all the branches, you simply have to deploy a different service for each branch to get a different URL.
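For example, a rough sketch with the gcloud CLI, assuming you build and push an image per branch (service name, image, and region are placeholders):

# Deploy one Cloud Run service per branch; each service gets its own run.app URL
gcloud run deploy myapp-feature-login \
  --image gcr.io/PROJECT_ID/myapp:feature-login \
  --region us-central1 \
  --platform managed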
That was for Cloud Run. Now, I'm not sure that Netlify is compatible with Cloud Run; I found no documentation on this.
This answer won't be directly useful to you but I think it's relevant and worth mentioning
The open source Knative API (and implementation) actually exposes a "tag" feature while splitting the traffic between multiple revisions: https://github.com/knative/docs/blob/master/docs/serving/spec/knative-api-specification-1.0.md#traffictarget
This feature is not currently supported on Cloud Run fully managed, but it will be.
By tagging releases this way, you could define tag: v1 and tag: v2 in your traffic configuration, and you would get new URLs like:
https://v1-SERVICE_NAME...run.app
https://v2-SERVICE_NAME...run.app
that directly go to these specific versions.
And the interesting thing is that the revisions you specify in the traffic: block of the Service object do not have to receive any traffic (you can set percent: 0), yet a domain name like the ones shown above is still created for the inactive revisions of your app.
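For illustration, a rough sketch of such a traffic: block, assuming a Kubernetes cluster with Knative Serving installed (service, revision, and image names are placeholders):

cat <<'EOF' | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
      - image: gcr.io/PROJECT_ID/myapp:latest
  traffic:
  - revisionName: myapp-00001   # older revision, kept reachable via its tag URL
    tag: v1
    percent: 0                  # receives no regular traffic
  - revisionName: myapp-00002   # current revision, serves all traffic
    tag: v2
    percent: 100
EOF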
So when Cloud Run fully managed supports tag fields, you can actually achieve this, although it will be a less out-of-the-box experience than Netlify.
I want to implement a tool that uses Jenkins, Docker, Docker Swarm, and AWS to give our team of developers a deployment tool they can use to manage instances and deployments.
Please recommend what technologies we (both administrators and developers) should be using, what needs to be built, and what sorts of machines we should have.
Any help here would be much appreciated.
Your question is too generic to provide a specific answer, as there are different approaches to implementing what you are trying to achieve. IMHO, rather than starting from a list of specific technologies, the best approach would be to talk with your existing dev team and administrators and come up with a container-based environment that all parties find easy to manage and maintain.
Each tool you have mentioned has different capabilities, and there are also other tools providing the same features that might be a better fit for your situation. (That's why a proper understanding between devs and admins about what you really want to achieve is necessary.)
Since you have asked what kind of machines you should have (I suppose this is in an AWS environment), try CoreOS on AWS instances. CoreOS (Container Linux) is a good option for managing and running container-based environments. [About CoreOS]
Jenkins can run in a Docker container and issue Docker commands to deploy new containers that reside in the same swarm as Jenkins. You also need to hook into a source code repository like Git. Jenkins Blue Ocean is something you could look at for pipelining your dev -> build -> test -> deploy -> maintain stages. Also, Travis CI, GitHub, JIRA, and Docker Hub are useful components for what you are trying to achieve.
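For example, a minimal sketch of running Jenkins that way, assuming Docker is already installed on the swarm node (image tag and volume name are just examples):

# Mounting the Docker socket lets the Jenkins container issue docker commands
# against the host; be aware this effectively gives the container root access to it
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts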
I have a running Ubuntu AMI instance with a CI/CD pipeline including Jenkins, Tomcat, Selenium, Sonar, Nagios, etc. I have used the Elastic IP directly in various configurations to get it working.
Is there any easy and direct way to export this image to my local Ubuntu machine, apart from the AWS export/import service?
For what it's worth, I don't agree with the downvotes. Cloud is fast becoming "infrastructure as code", and I place emphasis on the code aspect. As the differences between programming and infrastructure management begin to fade (see FaaS, functions as a service), there needs to be room for having that conversation in this forum, IMHO.
That said, let's move on to your question. AWS provides instance export functionality, though this solution may not fit all use cases. The two links below illustrate how to perform that export. YMMV. Good luck!
http://docs.aws.amazon.com/cli/latest/reference/ec2/create-instance-export-task.html
http://docs.aws.amazon.com/vm-import/latest/userguide/vmexport.html
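As a rough sketch of the CLI form of that export (the instance ID and bucket are placeholders; note that EC2 VM export has restrictions, for example it generally applies to instances originally brought in with VM Import/Export, and the S3 bucket must already exist with the required permissions):

aws ec2 create-instance-export-task \
  --instance-id i-1234567890abcdef0 \
  --target-environment vmware \
  --export-to-s3-task DiskImageFormat=VMDK,ContainerFormat=ova,S3Bucket=my-export-bucket,S3Prefix=exports/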
I would like to run WSO2 on two hosts, one serves as manager and the other as gateway worker.
I consulted the clustering guide and product profiles documentation, and I understand that after configuring the two hosts correctly, I can run the product with a selected profile:
-Dprofile=gateway-manager on the manager node
-Dprofile=gateway-worker on the gateway worker node
In addition to performing a selective run, I would also like the gateway-worker to have the minimal possible deployment, i.e. to be installed with only the artifacts it really needs.
Three options I can think of, from best to worst:
1. Download a minimized deployment package, in case there is one? On the site I saw only the complete package, which contains the artifacts of all the components. Are there other download options that contain selective artifacts per profile?
2. Download the complete package and then remove the artifacts that are not necessary for the gateway-worker (how do I know which files/directories to remove?)
3. Download the source from GitHub and run a selective build? (Which components should I build, and how do I package them for deployment?)
There are no separate product packs to download for each profile, so option 1 is not available. But you can do option 2 to some extent: you can remove the Publisher, Store, and admin-dashboard applications from the product by removing the 'jaggeryapps' folder in the 'wso2am-1.10.0/repository/deployment/server/' location. Other than that, we do not recommend removing any components from the pack.
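As a sketch, under the assumptions above (the paths follow the wso2am-1.10.0 layout mentioned in this answer; adjust them to your installation):

cd wso2am-1.10.0
# Strip the Publisher/Store/admin dashboard web apps from the worker node
rm -rf repository/deployment/server/jaggeryapps
# Start the node with the gateway-worker profile
sh bin/wso2server.sh -Dprofile=gateway-worker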
You can check the profile generation code for API Manager 1.10 here. It only has module import definitions. These components need to be there for each profile.