How can a Corda node be made highly available? (1. the container way, or 2. the Corda way?)
Could I install a second Jetty server exposing the same RESTful interface, invoking the same node over RPC alongside the first Jetty server, to start flows?
What are the future plans (e.g. Mesos auto-deployment across multiple machines)?
Can JMX be used to monitor ActiveMQ messages, queues, and topics?
How can Jetty's performance or feature set be tuned, given that it is embedded in the core code? For example, if I want to add a filter, it seems very hard to do without touching the core project. How do I set performance parameters?
For JPA custom queries, is there a flag to keep them from involving the vault tables, for off-chain use?
How should I design a table that associates a foreign key with another table?
How can a shell script communicate with a Corda node?
This is a lot of questions! Forgive me if I've misunderstood any of them, but I'll do my best to answer.
Not exactly sure what you mean here, but Corda has built-in redundancy features, and you can run Corda nodes in containers as well (here's an example repo doing so: https://github.com/EricMcEvoyR3/corda-docker-compose)
We now recommend using Spring Boot to build an HTTP server on top of a Corda node. Take a look here for an example: https://github.com/corda/samples-java/tree/master/Advanced/obligation-cordapp
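For a flavour of what that looks like, here is a minimal sketch of such a Spring Boot server; it is not taken from the sample above, and the RPC address, credentials, and endpoint path are assumptions you would replace with your own:

    // Minimal sketch: a Spring Boot HTTP endpoint backed by Corda RPC.
    // The RPC address (localhost:10006) and credentials are assumptions.
    import net.corda.client.rpc.CordaRPCClient;
    import net.corda.client.rpc.CordaRPCConnection;
    import net.corda.core.messaging.CordaRPCOps;
    import net.corda.core.utilities.NetworkHostAndPort;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    public class Server {
        public static void main(String[] args) {
            SpringApplication.run(Server.class, args);
        }
    }

    @RestController
    class NodeController {
        private final CordaRPCOps proxy;

        NodeController() {
            // Open an RPC connection to the running node.
            CordaRPCClient client = new CordaRPCClient(
                    NetworkHostAndPort.parse("localhost:10006"));
            CordaRPCConnection connection = client.start("user1", "test");
            this.proxy = connection.getProxy();
        }

        // GET /me returns the node's own legal identity.
        @GetMapping("/me")
        public String whoAmI() {
            return proxy.nodeInfo().getLegalIdentities().get(0).getName().toString();
        }
    }

The same proxy can call startFlowDynamic(...) to trigger flows, which is how the second webserver in your question would invoke the node.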
Here's a link to the Corda planned-features page: https://www.corda.net/platform-roadmap/. It's not super thorough, but it covers the basics.
Corda has some JMX integrations; you can find out more about them here: https://docs.corda.net/docs/corda-os/4.4/node-administration.html#monitoring-via-jolokia
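As a quick illustration, once jmxMonitoringHttpPort is set in node.conf the Jolokia agent answers plain HTTP, so a sketch like the following (port 7005 is an assumption; use whatever you configured) will list the exposed MBeans, including the Artemis queue and topic metrics:

    // Minimal sketch: listing the MBeans a Corda node exposes via Jolokia.
    // Port 7005 is an assumption; use the jmxMonitoringHttpPort from node.conf.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class JolokiaList {
        public static void main(String[] args) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:7005/jolokia/list"))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // The response is JSON describing every MBean; individual
            // attributes can then be fetched with /jolokia/read.
            System.out.println(response.body());
        }
    }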
The Corda Jetty webservers are basically deprecated; here's a blog post on how to migrate off of them: https://www.corda.net/blog/spring-cleaning-migrating-your-cordapp-away-from-the-deprecated-corda-jetty-web-server/
The Corda docs have a whole page on JPA and how it's supported; you can find more here: https://docs.corda.net/docs/corda-os/4.4/api-persistence.html
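To give a flavour of what that page covers, here is a hedged sketch of a custom schema and entity in the style of the docs' examples (the IOU names, table, and columns are illustrative):

    // Illustrative sketch of a Corda MappedSchema with one JPA entity.
    // A state would implement QueryableState and map itself to PersistentIOU.
    import net.corda.core.schemas.MappedSchema;
    import net.corda.core.schemas.PersistentState;
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Table;
    import java.util.Collections;

    // Schema family marker class.
    class IOUSchema {}

    public class IOUSchemaV1 extends MappedSchema {
        public IOUSchemaV1() {
            super(IOUSchema.class, 1, Collections.singletonList(PersistentIOU.class));
        }

        @Entity
        @Table(name = "iou_states")
        public static class PersistentIOU extends PersistentState {
            @Column(name = "lender") private String lender;
            @Column(name = "borrower") private String borrower;
            @Column(name = "iou_value") private int value;

            // JPA requires a no-argument constructor.
            public PersistentIOU() {}

            public PersistentIOU(String lender, String borrower, int value) {
                this.lender = lender;
                this.borrower = borrower;
                this.value = value;
            }
        }
    }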
This depends entirely on your CorDapp and the data you're trying to model, so it's not something that's easy to help with here. I will say, however, that for most CorDapps it usually isn't necessary.
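If your model really does need it, the usual shape in plain JPA terms is a parent entity with a @OneToMany collection whose @JoinColumn carries the foreign key; all names in this sketch are hypothetical:

    // Hypothetical parent/child mapping: child_states.parent_id is the
    // foreign key into parent_states.id.
    import javax.persistence.CascadeType;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.JoinColumn;
    import javax.persistence.OneToMany;
    import javax.persistence.Table;
    import java.util.List;

    @Entity
    @Table(name = "parent_states")
    class ParentEntity {
        @Id @GeneratedValue
        private Long id;

        // One parent row owns many child rows; the child table holds the FK.
        @OneToMany(cascade = CascadeType.ALL)
        @JoinColumn(name = "parent_id", referencedColumnName = "id")
        private List<ChildEntity> children;
    }

    @Entity
    @Table(name = "child_states")
    class ChildEntity {
        @Id @GeneratedValue
        private Long id;
        private String detail;
    }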
You can use the Corda shell to communicate with the Corda node; more info on that here: https://docs.corda.net/docs/corda-os/4.4/shell.html
Suppose you have many services, say more than 50, and you want to know the relations between them: for instance, which services have a hard dependency on each other and which a soft one, meaning that when a service goes down and stops functioning, which other services will stop working too.
Basically, we also need this to provide a level of high availability and uptime (SLA).
Also, when a new person joins the team, there should be some kind of documentation showing which services we have and what the dependency tree looks like.
What kind of relations do they have? Just messaging through a message broker, or direct requests?
Are those services running in just a test environment, a production environment, or both?
What tools and software are there to help us cover this?
I am just now looking into this question as well, especially modelling dependencies between services.
In my opinion you can go in two directions:
Service Registry
Configuration Management DB
There are a lot of full-blown service registries out there, which are not just static documentation: services register themselves with the registry, and others can discover a service and start using it automatically (a small self-registration sketch follows the list below). Examples:
Eureka, part of the Netflix Open Source Stack (Netflix OSS)
Consul
ZooKeeper
etcd
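To make the self-registration concrete, here is a minimal sketch against Consul's local-agent HTTP API (PUT /v1/agent/service/register); the service name, port, and health-check URL are invented for illustration:

    // Hedged sketch: a service registering itself with a local Consul agent.
    // Assumes an agent on localhost:8500; the service name, port, and
    // health-check endpoint below are illustrative, not prescriptive.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ConsulSelfRegistration {
        public static void main(String[] args) throws Exception {
            String payload = "{"
                    + "\"Name\": \"billing-service\","
                    + "\"Port\": 8080,"
                    + "\"Check\": {"
                    + "  \"HTTP\": \"http://localhost:8080/health\","
                    + "  \"Interval\": \"10s\""
                    + "}}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8500/v1/agent/service/register"))
                    .header("Content-Type", "application/json")
                    .PUT(HttpRequest.BodyPublishers.ofString(payload))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // Consul answers 200 with an empty body on success; the service
            // then appears under /v1/catalog/service/billing-service, which
            // gives you a live dependency picture for free.
            System.out.println(response.statusCode());
        }
    }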
For me the service registry option might be too complex, so I am going in the direction of a simple CMDB.
My company is currently evaluating Hyperledger (Fabric), and we're using it for our POC. It looks very promising, and we're targeting a production rollout in the next few months.
We're targeting AWS as our production environment.
However, we're struggling to find good tutorials/practices/recommendations about operating a Hyperledger network in such an environment.
I'm aware that Cello aims to solve/ease deploying and monitoring a Hyperledger network, but I also read that it's not production-ready yet. The question is: should we even consider looking at Cello at this point?
If not, what are our alternatives? Docker Swarm, Kubernetes?
I also didn't find information about recommended instance types. I understand this is application- and AWS-specific, but what are the minimal system requirements (memory, CPU, network) for, say, a 'peer' node? (Our application is not network-intensive, and only a few transactions will be submitted per day.)
Another question is where to create those instances on AWS from a geographical and decentralization point of view. Does it make sense to create all of them in the same region, or must we create instances running in different regions?
Thanks a lot.
Igor.
Yes, look at Cello. If nothing else, it will help you see the AWS deployment model.
There's really nothing special to it:
Design the desired system: peers, orderer, gateways, etc.
Then decide how many EC2 instances you need to support that.
As for WHERE (which region), that depends on where the connecting application is and what kind of fault tolerance you need for your business model.
One of the businesses I am working with wants a minimum of 99.99999% availability (roughly three seconds of downtime per year), so multi-region is critical. It's just another EC2 instance with sockets open from different hosts.
AWS doesn't provide much in terms of support for Hyperledger. They have some templates that let you set up the VMs initially, but that's stuff you can do yourself as well.
You are right, the documentation is very light and most of the time confusing. I got to the point where I can start from scratch with a brand-new VM, get everything ready, deploy my own network definition and chaincode, and I have scripts to do all of that.
IBM Cloud, however, has much better support for Hyperledger. You can design your network visually, download your connection profiles, deploy and instantiate chaincode, create and join channels, and handle certificates: pretty much everything you need to run and support such a network. It's light-years ahead of AWS. They even have a full CI/CD pipeline that you could replicate for your own project. If you look at their Marbles demo, you'll see what I mean.
Cello is definitely worth looking at, with the caveat that it's in incubation, meaning it's not real yet, not production-ready, and not really useful until it becomes a fully fledged product.
I am using WSO2 APIM 1.10.0 in a single-server deployment and would like to move to a clustered one. Looking at this documentation I could find a lot of information; however, something is bothering me: do I really always have to do all of it?
I mean, I don't want to split everything into multiple worker instances; all I want is to configure two full setups (key manager + publisher + store + gateway), each on its own host, and put a load balancer in front of them.
The requirements are simple: I would like to share the load across both of them and guarantee better availability in case one of the hosts goes down. Is it a MUST to break the whole installation down into components on both nodes, so that I have to start each component independently with offset ports configured?
I could see that in version 2.0.0 a lot has been simplified; is there any way to achieve the same on 1.10.0?
Regards
Splitting into profiles is not mandatory. It is designed this way so that API Manager can scale based on TPS. If you have a low TPS count and prefer a 2-node HA setup, you can do the following:
Cluster the two nodes using a membership scheme such as WKA or AWS.
Use dep-sync (deployment synchronization) to share API artifacts between the two nodes.
Use one node as the publisher; publisher traffic needs to be handled by a single node to avoid SVN conflicts.
You can serve API requests from both nodes.
You do not always have to use the deployment pattern mentioned in the documentation you have pointed to; there are various other deployment patterns you can choose according to your scalability needs and requirements.
Please refer to the following documentation: [1] for the different deployment patterns you can use for WSO2 API Manager, and [2] for more information on worker/manager separation and load balancing.
[1] https://docs.wso2.com/display/CLUSTER44x/API+Manager+Deployment+Patterns
[2] https://docs.wso2.com/display/CLUSTER44x/Separating+the+Worker+and+Manager+Nodes
I've started my journey with cloud-related technologies very recently. I'm trying to understand the basics so that I can prepare the foundation for a basic cloud setup in my Internet of Things oriented company.
While browsing the Internet I've stumbled upon the following two groups of open source projects:
WSO2 / Mule / ...
OpenStack / CloudStack / Eucalyptus / ...
I'm trying to understand:
what kind of service do they offer? (IaaS, PaaS, SaaS, other?)
what are the differences between them?
what do they have in common?
how do they play with other cloud-related technologies like Amazon AWS?
which one would you recommend to get some basic experience and for some early proof-of-concept? (I'm looking for the easiest option first)
CloudStack and OpenStack are open-source software designed to deploy and manage virtual machines and networks that can deliver cloud services. Mainly, they provide Infrastructure as a Service (IaaS). There are a lot of comparisons of the two on the internet. This software needs to be installed on your own hardware, which you maintain, and you then provide a cloud service from it. Amazon AWS, by contrast, is a readily available service where you don't do installations or maintain hardware; you just consume the service.
WSO2 and MuleSoft are different from the above two: they are software platforms made up of several products (such as an ESB). Both provide cloud platform facilities for deploying their products.
We cannot say which one to use, but based on your requirements you may choose one or two (e.g. WSO2 products deployed on Amazon AWS, or WSO2 products deployed on CloudStack VMs). Since you are setting up an Internet of Things platform, you may want to look at the products these providers offer. The following source [1] will give you an idea of an IoT platform built from several free, open-source WSO2 products.
[1] http://wso2.com/landing/internet-of-things/
I'm trying to install and configure a high-availability setup for the WSO2 API Manager. I've been reading through this document: http://docs.wso2.org/wiki/display/Cluster/Clustering+API+Manager and it explains how to break the application up into its 4 components in separate folders, and that these 4 components can run on a single server. I'm not sure why this is needed. All I really want to do is take 2 servers, install the full application on both of them (without breaking it up into 4 different pieces), and cluster the two together with an Elastic Load Balancer in front of them.
What is the purpose of splitting the components up on the same server if they all run out of a single installation? I'm looking for the simplest way to provide failover capability for this application if one server goes down. Any insight into their methodology would be greatly appreciated.
Thanks.
The article you've linked describes distributing the different components of API Manager. If you look at the very end of that article, there's a link to the clustering configuration doc. In a production deployment it is usually encouraged to run the 4 components on different nodes, rather than putting everything on one node and running multiple such nodes; that's why it goes on to explain breaking the product down into separate components. The official AM documentation below has a page on the different deployment patterns.
You can go through the following articles to get a better understanding of clustering API Manager.
http://docs.wso2.org/wiki/display/AM140/Clustered+Deployment
http://sanjeewamalalgoda.blogspot.com/2012/09/how-do-clustering-and-enable-replicate.html
My 2 cents:
The documentation mentioned in the remarks explains how WSO2 sees the world of clustering: spread the different functionality over different JVMs. This sounds logical from an architectural point of view. A disadvantage is that operations then has to administer all the different applications as well, which makes the technical architecture rather complex.
In our situation, we defined 2 servers with extra CPU and memory; on these we installed the full WSO2 API Manager and defined the cluster configuration, everything provisioned via Puppet.
Just a straightforward install, with all data sources pointing to one schema in an Oracle database.
And... it is working: our developers are happy, operations is happy, and the architecture department is happy.