Disclaimer: I did post this on Server Fault, first, and the replies there were:
I'm voting to close this question as off-topic because we are not AWS support.
This question does not appear to be about server, networking, or related infrastructure administration within the scope defined in the help center.
I think this is a valid question, and even first-party support can be found on the Stack Exchange network. I also think issues and limitations are easier to find on SO than across the multitude of AWS documentation pages. That is why I'm posting this question on SO.
The issue/question
From what I've found in the AWS documentation and its limited subset of Apache ActiveMQ configuration elements, I can't see how to use the Camel plugin that is supposed to be built into newer versions of ActiveMQ. I figure this was left out of the Amazon MQ version, or is blocked by the configuration limitations.
This is the list of available configuration elements. The configuration document's root element is <broker>, and it looks like Camel is supposed to be configured as a sibling of that node in a traditional ActiveMQ config file.
Camel is not supported today running within the Amazon MQ broker itself; however, here is a blog post showing how to use Camel with Amazon MQ.
https://aws.amazon.com/blogs/compute/integrating-amazon-mq-with-other-aws-services-via-apache-camel
The Camel "plugin" is actually simply an imported Spring configuration file that fires up Camel. AmazonMQ does not, as to my understanding, permit imported configuration files hence running an embedded Camel is not possible.
Related
Elasticsearch itself should be safe because of the Java Security Manager settings. We're not using logging anyway, so even if those settings are disturbed, we might not be sending anything to the logger.
But Amazon has still issued a Log4j patch for our instance -- after several days now. The patch (R20211203-P2) could just be an upgrade to Log4j 2.15. Or maybe it secures some other logger in the control plane that we can't see?
We have tried requests containing common exploit strings and we do not see any requests coming to our target.
Were we safe before patch R20211203-P2 arrived? Does anyone know what R20211203-P2 actually does? There are no release notes.
Amazon OpenSearch Service has released a critical service software update, R20211203-P2, that contains an updated version of Log4j2 in all regions. We strongly recommend that customers update their OpenSearch clusters to this release as soon as possible.
So yeah I would upgrade ASAP just in case.
As we move our workloads to AWS I am looking for an ETL tool which is widely used and has the appropriate connectors - Apache Camel appears to fit the bill. However, I am struggling to find information on how Camel can be deployed in AWS - the obvious one is on an EC2 instance, but we would like to avoid the setup and maintenance required by Virtual Machines. I don't see anyone offering it as a managed service, so the option I'd like to explore is running it as a container in ECS, as we will have numerous other containers running.
Containers don't seem to be an installation option on the Apache Camel website - perhaps it is just too limiting for a tool whose purpose is to connect to everything else?
Is it acceptable and practical to run Camel in a container, and where could I find more information about it?
Apache Camel appears to fit the bill.
Indeed, Apache Camel is a great integration framework. And that's the point: it is a framework, not a product. So there are multiple ways to run Camel flows: as a web app, as a standalone app, or as part of your own code. Camel itself is pretty agnostic about the way you run the flows, and that's why no specific way is enforced on the website.
If you want an out-of-the-box product that can generate containerized deployments with Apache Camel, you could have a look at Apache ServiceMix, Apache Karaf, or its commercially supported counterpart, Red Hat Fuse.
Is it acceptable and practical to run Camel in a container, and where could I find more information about it?
It is perfectly fine.
Question: are you able to create a Docker container with your (or any other) application? Based on the question, this skill seems to be lacking, and I really suggest learning it.
You may check the following post: https://medium.com/@wkrzywiec/how-to-put-your-java-application-into-docker-container-5e0a02acdd6b
# Minimal Dockerfile for a packaged Java app (from the post above).
# Note: the java:8-jdk-alpine image is deprecated; a maintained base
# image such as eclipse-temurin:8-jre-alpine is a drop-in replacement.
FROM java:8-jdk-alpine
# Copy the built jar into the image
COPY ./target/myapp.jar /usr/app/myapp.jar
WORKDIR /usr/app
# Only needed if the app serves HTTP on this port
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "myapp.jar"]
Let's assume you can run your ETL tasks as a standalone application, then just run it in the container as any other standalone application.
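If the ETL flow is packaged as a standalone Spring/Camel application, the route definition could be a Spring XML file along these lines (a minimal sketch; the endpoint URIs, route id, and file name are illustrative, not taken from the question):

```xml
<!-- camel-context.xml: minimal standalone route sketch (endpoints illustrative) -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://camel.apache.org/schema/spring
         http://camel.apache.org/schema/spring/camel-spring.xsd">
  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <route id="etl-example">
      <!-- poll an input folder and hand files off to a queue -->
      <from uri="file:/data/in"/>
      <to uri="jms:queue:etl.out"/>
    </route>
  </camelContext>
</beans>
```

Packaged with its dependencies into a single jar, such an app runs under the Dockerfile shown above like any other standalone Java application.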
we would like to avoid the setup and maintenance required by Virtual Machines
Question: how do you distribute your Camel tasks? I mean, what is the result of your build? A war file? A standalone app?
To build a web app you could see https://www.baeldung.com/spring-apache-camel-tutorial
The most convenient way to deploy a war file in AWS is the AWS Elastic Beanstalk service.
If you build a standalone application (or use ServiceMix) and you can build a container, then indeed ECS or Fargate seem like natural options.
I am performing the initial setup of WSO2 Identity Server for a small organization. We will not have a large number of transactions but we want high availability and reliability. We have decided on a two node “cluster” or “active/active” configuration. We have been testing with WSO2 IS v 5.3.0. I am having a problem sorting through all the documentation and determining which install documents to use.
I found this document for WSO2 IS v 5.2.0 that specifically covers “Clustering Identity Server.” This document references a detailed database setup procedure that appears to be out of date. It also covers the configuration of Hazelcast along with editing other XML files. The clustering install guide is located here: https://docs.wso2.com/display/CLUSTER44x/Clustering+Identity+Server+5.1.0+and+5.2.0
Then I found this newer install guide that covers the “active/active” configuration. This document is titled “Deployment for Small and Medium-sized Enterprises” and is located here https://docs.wso2.com/display/IS5xx/Deployment+for+Small+and+Medium-sized+Enterprises#DeploymentforSmallandMedium-sizedEnterprises-Active/Active
This new document contains a very simple procedure for setting up an active/active configuration that looks like it will meet our needs. My concern is that this second document does not cover any of the specific database setup that is covered in the clustering document. This second document does not cover the hazelcast setup or other clustering configurations. My guess is the "active/active" setup is not the same as the "clustered" architecture.
Can someone clarify the difference between the "clustered" and "active/active" architecture?
As I understand it, deployment pattern 1 in this document matches your requirement. It explains how to configure a highly available clustered deployment. There is no difference between the two terms.
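For reference, in the 5.x clustering guide the Hazelcast membership is enabled in repository/conf/axis2/axis2.xml with a fragment roughly like the following (a sketch based on that guide; the domain name, hosts, and port are placeholders for your own values):

```xml
<!-- axis2.xml sketch: well-known-address (WKA) Hazelcast clustering.
     Hostnames, port, and domain below are placeholders. -->
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
  <parameter name="membershipScheme">wka</parameter>
  <parameter name="domain">wso2.is.domain</parameter>
  <members>
    <member>
      <hostName>10.0.0.1</hostName>
      <port>4000</port>
    </member>
    <member>
      <hostName>10.0.0.2</hostName>
      <port>4000</port>
    </member>
  </members>
</clustering>
```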
I think this is the documentation you are looking for.
We are trying to deploy a WebJob via Octopus. We have different Event Hub keys saved in the variables, and we expect the WebJob to pick up the right key depending on the environment it is being deployed to. Has anyone done this before? Any advice on setting up configurations in Octopus?
<========== UPDATE ===========>
We were being careless and hadn't set our Octopus process to transform the configuration variables. You should be able to do so by clicking 'Configure Variables' in the process step.
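Concretely, with that feature enabled Octopus rewrites .config files in the package, replacing the value of any appSetting or connection string whose key matches the name of a project variable. A sketch (the "EventHubKey" key is an illustrative name, assuming an Octopus variable of the same name exists per environment):

```xml
<!-- App.config sketch: with the step's configuration-variables feature
     enabled, Octopus overwrites the value below with the environment's
     "EventHubKey" project variable at deployment time. -->
<configuration>
  <appSettings>
    <add key="EventHubKey" value="local-dev-placeholder"/>
  </appSettings>
</configuration>
```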
I don't think the fact that it is deployed via Octopus is all that relevant here. Generally, a .NET WebJob is able to access Azure App Settings using the standard configuration API.
If that is not working for you, please update your question to clarify what you tried, and specifically what didn't work.
I am looking into migrating my parse.com app to parse server with either AWS or Heroku.
The primary frustration I encountered with Parse in the past has been the resource limits
https://parse.com/docs/cloudcode/guide#cloud-code-resource-limits
Am I correct in assuming that following a migration the resource limits will be dependent on the new host (i.e. AWS or Heroku)?
Yes. Parse Server is simply a Node.js module, which means that wherever you choose to host your Node.js app determines which resource limits are imposed. You might also be able to set them yourself.
I recently moved mine to AWS, so yes, as stated in another answer, it's just a Node.js module, and you have complete control over it. The main constraints here will be the CPU, I/O, and network limits of AWS. I would suggest reading the documentation at https://github.com/ParsePlatform/parse-server ; it also mentions which EC2 instances to use so that you can scale Node and Mongo properly.