I want to simulate an enterprise environment with OpenShift Origin V3 to test some things.
I'm going to try the advanced installation with multiple masters, multiple etcd hosts, and multiple nodes.
https://docs.openshift.org/latest/install_config/install/advanced_install.html
I already did the quick installation once (running OpenShift in a container), and I did the advanced installation a few times (one host containing a master + a node, plus some extra nodes).
First of all, I'm installing the whole environment on AWS EC2 instances with CentOS 7 as the OS. I have 2 masters (master1.example.com and master2.example.com) and 3 nodes (node1.example.com, node2.example.com, ...).
I want to separate my masters and nodes, so containers and images will only live on the nodes (no host that contains both a master and a node).
My masters need to be HA, so they will use a virtual IP and Pacemaker. But how do I configure this? There are tutorials for using Pacemaker with Apache, but nothing that describes configuring Pacemaker and a VIP for OpenShift.
Great news: I had to deal with Pacemaker as well, but Pacemaker is no longer the native HA method for OpenShift (as of v3.1), so we can get rid of the tedious pcs configuration and fencing tuning.
The Ansible installation playbooks now take care of multi-master installation, using what is called the OpenShift native HA method. No additional setup is required for a standard configuration.
The native HA method takes advantage of etcd to elect the active leader every 30 seconds (by default).
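For reference, the multi-master part of an Ansible inventory might look something like this. This is only a minimal sketch using the hostnames from your question; the lb host running HAProxy and the two cluster hostnames are assumptions of a standard native-HA layout, not something from your setup:

[OSEv3:children]
masters
nodes
etcd
lb

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
openshift_master_cluster_method=native
openshift_master_cluster_hostname=internal-openshift.example.com
openshift_master_cluster_public_hostname=openshift.example.com

[masters]
master1.example.com
master2.example.com

[etcd]
master1.example.com
master2.example.com

[lb]
lb.example.com

[nodes]
node1.example.com
node2.example.com
node3.example.com

Note that an odd number of etcd hosts is recommended for quorum, so with 2 masters you may want a third etcd host.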
There is a documented method to migrate from Pacemaker to native HA:
https://docs.openshift.com/enterprise/3.1/install_config/upgrading/pacemaker_to_native_ha.html#install-config-upgrading-pacemaker-to-native-ha
I am using django-clamd (https://github.com/vstoykov/django-clamd) inside my Django app. Scanning works fine on my local system when uploading files, and virus-containing files are detected.
I want to achieve the same for my app pod in a Kubernetes environment. Any ideas how to set up Clam antivirus as a single instance (one pod per node in the cluster) in k8s, so that apps deployed in different namespaces can be scanned using ClamAV? I do not want to set up a separate instance of ClamAV for every app deployment, as a single ClamAV instance needs around 1 GB of RAM.
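To make the idea concrete, what I'm picturing is a per-node clamd via a DaemonSet, reachable over clamd's TCP port. This is only a sketch of what I have in mind; the namespace and image are placeholders I made up:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: clamd
  namespace: clamav           # hypothetical dedicated namespace
spec:
  selector:
    matchLabels:
      app: clamd
  template:
    metadata:
      labels:
        app: clamd
    spec:
      containers:
      - name: clamd
        image: clamav/clamav:latest   # placeholder image
        ports:
        - containerPort: 3310         # clamd's default TCP port
        resources:
          requests:
            memory: "1Gi"             # clamd needs ~1 GB for signatures

A Service in front of it would then give django-clamd in any namespace a single shared address (e.g. clamd.clamav.svc.cluster.local) to scan against.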
Can somebody please help me? I configured three masters following the Digital Ocean guide, and I'm trying to access the Mesos and Marathon interfaces using my host-only adapter address, but the browser just says the site can't be reached and the connection was refused.
Thanks for asking your question. First of all, the document you are referencing from Digital Ocean is from 2014, and while DC/OS can run on Ubuntu, it is not a supported operating system for this product. There are also concerning suggestions in this article that I would avoid (such as using a quorum of only 2 Mesos masters, which it looks like you already noticed). Lastly, the company itself is called Mesosphere and the product is called DC/OS :)
With all that out of the way: much has changed in DC/OS since 2014, and the Digital Ocean document is obsolete. You no longer have to manually configure ZooKeeper, cluster quorum size, Marathon, Mesos, edit host files on agents, or any of the other items this document references for a production cluster. All of that is taken care of in a YAML configuration file called config.yaml if you attempt an "advanced" install.
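To give a feel for it, a minimal config.yaml for a static install might look roughly like this (the IPs and cluster name are placeholders; see the configuration-parameters link below for the authoritative list):

bootstrap_url: file:///opt/dcos_install_tmp
cluster_name: test-cluster
exhibitor_storage_backend: static
master_discovery: static
master_list:
- 10.0.0.11
- 10.0.0.12
- 10.0.0.13
resolvers:
- 8.8.8.8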
You should have more success with connecting to the UIs (DC/OS, Marathon, and Mesos) by attempting an installation method that is up-to-date, on a tested and supported OS, and using the latest versions of DC/OS provided by Mesosphere. This will remove the difficulties you are seeing from the obsolete documentation.
https://docs.mesosphere.com/1.11/installing/oss/
https://docs.mesosphere.com/1.11/installing/oss/custom/system-requirements/
https://docs.mesosphere.com/1.11/installing/oss/custom/configuration/configuration-parameters/
https://docs.mesosphere.com/version-policy/
Hope this helps and you see success with DC/OS!
Edit: if you want to avoid DC/OS, use the Mesos advanced course: https://open.mesosphere.com/advanced-course/
I have a question related to the network setup of additional peers in Hyperledger Fabric.
I want to add 2 more peers to an existing peer to form a network, but all the available documentation is about connecting peers together using Docker. In my case I already have multiple servers, so I installed the peers directly on 3 different servers. But how can I connect those 3 peers so they are up and running together? I cannot find any documentation related to this.
I hope you can guide me.
Is installing Fabric directly on a server recommended or not?
Below is a screenshot of one peer; it's up and running fine. But how do I connect 3 peers together directly on different servers without using Docker?
I have followed this link: https://github.com/hyperledger/fabric/blob/master/docs/Setup/Network-setup.md
but I still can't find the way, since it uses Docker.
Is there any file I can modify to make this work, such as core.yaml, etc.?
[screenshot of the running peer]
When using Hyperledger Fabric version 0.6, there is not a straightforward way to connect multiple peers on different servers without using Docker. As you noted, the Setting Up a Network section covers how to use Docker Compose to link peers together. One of the primary uses of a Hyperledger Fabric version 0.6 network is to learn how to develop chaincode. The focus isn't so much on dynamically allowing peers to join a network.
There are a few options for creating a blockchain network for Hyperledger Fabric version 0.6.
Published Docker images
Setting up a development environment
Creating an instance of the Blockchain service on Bluemix
Hyperledger Fabric 1.0 (currently under development) aims to make it easier for different entities to join a blockchain network. An early preview of the related concepts was covered during a Connect-A-Thon event. There is also an article about this event.
Yes, we can create a blockchain network without Docker using Hyperledger Fabric v0.6; we have done it in practice. You do, however, have to use Docker to deploy chaincode.
For Hyperledger Fabric v0.6, with or without Docker, you cannot add more peers to an existing network.
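To give an idea of the wiring, here is a rough sketch of how peers can be pointed at each other when running natively. The hostnames are placeholders, this assumes the default peer port 7051, and the same peer.discovery.rootnode setting can go in core.yaml instead of environment variables:

# on the first (root) peer, e.g. server1
CORE_PEER_ID=vp0 \
CORE_PEER_ADDRESSAUTODETECT=true \
peer node start

# on each additional peer, point discovery at the root node
CORE_PEER_ID=vp1 \
CORE_PEER_DISCOVERY_ROOTNODE=server1.example.com:7051 \
CORE_PEER_ADDRESSAUTODETECT=true \
peer node start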
Please let me know if this question is more appropriate for a different channel, but I was wondering what the recommended tools are for installing, configuring, and deploying Hadoop/Spark across a large number of remote servers. I'm already familiar with how to set up all of the software, but I'm trying to determine what I should start using so I can easily deploy across many servers. I've started to look into configuration management tools (i.e. Chef, Puppet, Ansible), but I was wondering what the best and most user-friendly option is to start with. I also do not want to use spark-ec2. Should I be creating homegrown scripts to loop through a hosts file containing IPs? Should I use pssh? pscp? etc. I want to just be able to ssh into as many servers as needed and install all of the software.
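For example, the homegrown route I'm picturing is just parallel SSH over a hosts file, something like this (hosts.txt, the user, and the package are placeholders; the pscp binary may be named pscp.pssh or parallel-scp depending on the distro):

# run the same install command on every host listed in hosts.txt
pssh -h hosts.txt -l centos -t 0 -i "sudo yum install -y java-1.8.0-openjdk"

# push a tarball to every host
pscp -h hosts.txt -l centos spark.tgz /tmp/spark.tgz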
If you have some experience with a scripting language, you can go for Chef. Recipes are already available for deploying and configuring a cluster, and it's very easy to start with.
If you want to do it on your own, you can use the sshxcute Java API, which runs scripts on remote servers. You can build up the commands there and pass them to the sshxcute API to deploy the cluster.
Check out Apache Ambari. It's a great tool for central management of configs, adding new nodes, monitoring the cluster, etc. This would be your best bet.
I have an Amazon EC2 instance that I'd like to use as a development server for client projects as well as to run JIRA. I have a domain pointed at the EC2 server's IP. I'm new to Docker, so I'm unsure if my approach is correct.
I'd like to have a JIRA container installed (with another jiradb MySQL container) running at jira.domain.com, as well as the potential to host client staging websites at client.domain.com, which point to the clients' Docker containers.
I've been trying to use this JIRA Docker image with the provided command:
docker run --detach --publish 8080:8080 cptactionhank/atlassian-jira:latest
but the container always stops running mid-setup (setup takes a while between steps). When I run the container again, it goes back to the start of setup.
Once I have JIRA set up how would I run it under a subdomain? And how could I then have client.domain.com point to a separate docker container?
Thanks in advance!
As you probably know, there are two considerations for getting Jira set up, whether as a server or a container:
1) You need to enter a license key early in the setup process (and it requires an Internet connection for verification), even if it's an evaluation
2) By default Jira will use its built-in (H2, IIRC) database, unless you configure an external one
So, in the case of 2) you probably want to make sure you have your external database ready and set up.
See Connecting Jira applications to external databases for preparatory steps for a variety of databases.
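For example (an untested sketch; the container names, password, and database name are placeholders), you could run a MySQL container first and link the Jira container to it:

docker run --detach --name jiradb \
  --env MYSQL_ROOT_PASSWORD=changeme \
  --env MYSQL_DATABASE=jiradb \
  mysql:5.7

docker run --detach --name jira --link jiradb:jiradb \
  --publish 8080:8080 cptactionhank/atlassian-jira:latest

You'd then point Jira's setup wizard at the jiradb host. Note that Atlassian doesn't bundle the MySQL JDBC driver, so check whether the image you're using includes it.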
You didn't mention at what stage your first setup run fails. However, once you've gotten past step 1) (or any further successful setup step), one of the first things I did, so as not to lose all the work I'd done, was to commit the container:
docker commit -a 'My Name' -m 'Jira configured and set up' <container ID> myrepo/myjira:mytag
That way you don't lose all your previous work and you save your container into a new image in one fell swoop.
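From then on you can start your configured Jira from the committed image instead of the original one:

docker run --detach --publish 8080:8080 myrepo/myjira:mytag

One caveat: docker commit does not capture data stored in volumes, so if the image declares the Jira home directory as a volume, that data won't be included in the committed image.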