Update Apache to version 2.4.50 on Amazon Linux - amazon-web-services

I am having some issues installing Apache 2.4.50 on Amazon Linux. Originally I had an Amazon Linux AMI and tried to run sudo yum update httpd -y
I wanted this to update Apache to the latest version, but it only went as far as 2.4.48 and said that was the latest version.
While looking into this I saw that the Amazon Linux AMI is now deprecated and in maintenance mode only.
I decided to spin up a new server using Amazon Linux 2, thinking this would solve my issue, but once I installed the LAMP stack and ran httpd -v
it still showed ‘Package httpd-2.4.48-2.amzn2.x86_64 already installed and latest version’
I am aware that there is a critical issue (CVE-2021-40438) affecting 2.4.48, and that Apache 2.4.49 has a new zero-day critical vulnerability that can lead to a path traversal attack, so I need to get to 2.4.50.
Any help on how I can update to this version would be greatly appreciated.
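For reference, a minimal sketch of how to check which httpd builds the enabled repositories actually offer (assuming yum on Amazon Linux 2):
# list enabled repositories and every httpd build they provide, next to the installed one
sudo yum repolist
sudo yum --showduplicates list httpd
# show details of the candidate package yum would install
sudo yum info httpd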

Related

Google cloud compute engine - disable automatic updates (centos)

I wonder if there is a way to disable automatic updates of our Linux machines on Google Cloud (yum update)
As far as I know, during the maintenance window our servers get new software packages installed (I checked yum.log). Since our installed software must be a specific version (not the latest), we don't want Google to run updates for us, because it usually breaks all kinds of dependencies...
I have searched on Google but didn't find any info about that.
Thanks.
The CentOS 7 image used in Compute Engine includes yum-cron installed and enabled by default. You can verify this with either of the following commands:
sudo yum list installed yum-cron
sudo systemctl status yum-cron.service
yum-cron periodically checks for updates and applies them if any are available.
Solution
If you have yum-cron running on your instance, you can disable auto-updates by editing the configuration file /etc/yum/yum-cron.conf and changing the following variables to ‘no’:
update_messages = no
download_updates = no
apply_updates = no
This will prevent the system from updating automatically.
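A rough sketch of making the same change non-interactively (this assumes the default key names shown above):
# flip the three flags to 'no' and restart yum-cron so the settings take effect
sudo sed -i 's/^update_messages.*/update_messages = no/' /etc/yum/yum-cron.conf
sudo sed -i 's/^download_updates.*/download_updates = no/' /etc/yum/yum-cron.conf
sudo sed -i 's/^apply_updates.*/apply_updates = no/' /etc/yum/yum-cron.conf
sudo systemctl restart yum-cron.service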
Alternatively, you can uninstall the package from your system using the following command:
sudo yum remove yum-cron
This part is missing from the official documentation, so it will be added soon.

How do I update my h2o version on AWS working with flow?

I installed h2o using the AMI from the marketplace. It installed 3.14, and I am trying to update it to the latest stable h2o.ai release so my co-workers can use Flow. How can I best do this?
I have tried uninstalling and reinstalling using
pip install http://h2o-release.s3.amazonaws.com/h2o/rel-wolpert/4/Python/h2o-3.18.0.4-py2.py3-none-any.whl
and also directly by SSH-ing into my instance via a terminal. However, even if it says "successfully uninstalled", it seems to keep reverting to version 3.14.
I suspect there is a script at startup that is reinstalling and loading 3.14, but I can't figure it out. Any help is appreciated.
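As a rough sketch, the usual upgrade sequence for the h2o Python client, plus a check for boot-time scripts that might reinstall 3.14, looks something like this (the wheel URL is the one above; the searched paths are assumptions):
# remove the old client, then install the new wheel
pip uninstall -y h2o
pip install http://h2o-release.s3.amazonaws.com/h2o/rel-wolpert/4/Python/h2o-3.18.0.4-py2.py3-none-any.whl
# look for startup scripts that might relaunch the bundled 3.14 build
grep -ril "h2o" /etc/init.d /etc/rc.local /etc/systemd/system 2>/dev/null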

Ambari-agent "CERTIFICATE_VERIFY_FAILED", Is it safe to disable the certificate verification in Python?

Ambari version: 2.2.2.18
HDP stack: 2.4.3
OS: centos 7.3
Issue description:
Ambari-server can't communicate with the Ambari agent. I can see the error below in the ambari-agent logs:
ERROR 2017-09-18 06:35:34,684 NetUtil.py:84 - [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
ERROR 2017-09-18 06:35:34,684 NetUtil.py:85 - SSLError: Failed to connect. Please check openssl library versions.
I have been facing this issue recently, and it can be replicated consistently after the instances are restarted (I am using EC2 instances).
I am able to register agent nodes successfully, install the HDP cluster, run YARN jobs, etc. with no problem at all. Once I restart my instances, I see this problem.
There are some solutions already posted for this problem like:
Downgrade Python from 2.7 to a lower version. This is a known problem with Ambari and Python 2.7.
Control certificate verification by disabling it:
set "verify = disable" under /etc/python/cert-verification.cfg
I don't want to play with Python, as it can disrupt many other things like Cassandra, the yum package manager, etc.
The second workaround is very easy and it works well!
Now comes my question: is it safe to disable certificate verification in Python, i.e. by setting verify = disable?
Generally, it's a bad idea. If somebody has access to the port on the server that is used for agent-server communication (8443, if I'm not mistaken), they can register as an agent and get all your cluster configs and passwords. Or a classic man-in-the-middle attack would allow the same by reading your unencrypted traffic. A slightly more involved attack would allow sending commands to the agents (probably with root permissions).
Your issue sounds like you reprovisioned your ambari-server host and left old ambari-agent instances running, or maybe your certificates became outdated? On first connection to ambari-server, agents generate certificates and send them to the server. The server signs these certificates with its own key, so the server-agent connection is then encrypted. Did you try to remove the old certificates and restart the server and agents as suggested here?
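A rough sketch of that cleanup (the keys directory is the default ambari-agent layout, so treat the paths as illustrative):
# on each agent: stop it and remove the stale certificates it generated earlier
ambari-agent stop
rm -f /var/lib/ambari-agent/keys/*.crt /var/lib/ambari-agent/keys/*.key
# restart the server, then bring the agents back up
ambari-server restart
ambari-agent start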
How we investigated this issue and what solution we adopted:
Investigation Details:
Downgrading to Python 2.6 is not feasible, as there are OS dependencies, and per the suggestion from 'Dmitriusan' in the previous comment, it's not a good idea to disable certificate verification in Python.
We use AWS EC2
With Python 2.7, JDK 1.8 and CentOS 7.2 there is no issue; everything is smooth.
With Python 2.7, JDK 1.8 and CentOS 7.3 or CentOS 7.4 we see this issue.
The issue I have reported here is with respect to CentOS 7.3; with CentOS 7.4 the issue is slightly different: certificate verification fails while adding nodes to the cluster itself.
Downgrading from CentOS 7.3 to 7.2 is not straightforward. The AWS EC2 Marketplace provides a CentOS 7.0 image, and when we create an instance from it, security and patch updates are applied, resulting in CentOS 7.3.
We could create our own CentOS 7.2 image from existing servers, but it's always good to stay on the latest OS update for security reasons.
In short, we had workarounds but not a solution.
Solution which we adopted:
After a series of tests, we decided to upgrade to CentOS 7.4, HDP-2.6.3.0, and Ambari 2.6.0.0.
With CentOS 7.4 and Ambari version 2.6.0.0, we don't see this issue even though I have Python 2.7.5 installed.
So this looks to be an issue with Ambari.
The older version of Ambari (2.4.2) does not recognize the force-TLS configuration. We upgraded Ambari to 2.6.2 and the heartbeat started working.
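For reference, the force-TLS setting referred to here is an agent-side option along these lines (an illustrative sketch; whether it is honored depends on the Ambari version, which was the whole problem):
# /etc/ambari-agent/conf/ambari-agent.ini
[security]
force_https_protocol=PROTOCOL_TLSv1_2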

Installing Jenkins on AWS EC2

I am running into an issue with the Jenkins install wizard when following the Set Up a Jenkins Build Server tutorial from Amazon.
My EC2 instance is a t2.small. It was a t2.micro until I saw this SO post, so I switched it to a t2.small; it doesn't appear to be a memory issue. I am getting an error when creating my initial user or when trying to Continue as admin.
When inspecting the element while trying to Save and Finish during initial user creation, the POST to http://<domain>:8080/setupWizard/createAdminUser errors out with an ERR_CONNECTION_RESET error. (I don't see anything in /var/log/jenkins/jenkins.log about this failure either.)
I am running Java 1.8, and I've tried with Jenkins 2.71-1.1 and Jenkins 2.61-1.1:
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b11)
OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)
I grepped the error log and only found one entry pertaining to errors (but I'm not sure it is related):
Jul 24, 2017 11:09:50 PM hudson.ExtensionFinder$GuiceFinder$FaultTolerantScope$1 error
INFO: Failed to instantiate optional component hudson.plugins.build_timeout.operations.AbortAndRestartOperation$DescriptorImpl; skipping
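For completeness, a few generic checks that can show whether Jenkins itself drops the connection (a sketch assuming a systemd-based install with net-tools available):
# is the jenkins service still up when the reset happens?
sudo systemctl status jenkins
# is anything listening on 8080?
sudo netstat -tlnp | grep 8080
# watch the log while reproducing the failing POST
sudo tail -f /var/log/jenkins/jenkins.log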
I created a CDK project to provision a Jenkins service in AWS. Give it a try:
https://github.com/seraphjiang/jenkinscdk
Install the certificates:
sudo apt install ca-certificates
Then update and upgrade the packages:
sudo apt update
sudo apt upgrade
Then follow this link to install Jenkins:
https://www.digitalocean.com/community/tutorials/how-to-install-jenkins-on-ubuntu-18-04
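For reference, a rough sketch of the apt-based Jenkins install that such tutorials used at the time (the repository key and URL are assumptions to verify against the current Jenkins documentation):
# add the Jenkins apt repository and key, then install the package
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt update
sudo apt install jenkins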

nginx on an Amazon EC2 instance which has RedHat 4.4.4-13?

I have an EC2 instance running on Amazon using AMI (ami-1b814f72). It's running RedHat version 4.4.4-13.
I want to install nginx and gunicorn with Django. According to the nginx http://wiki.nginx.org/Install#Official_Red_Hat.2FCentOS_packages page, I need to create a file /etc/yum.repos.d/nginx.repo and paste in the lines for finding the repo. But they also mention that:
Due to differences between how CentOS, RHEL, and Scientific Linux
populate the $releasever variable, it is necessary to manually replace
$releasever with either "5" (for 5.x) or "6" (for 6.x), depending upon
your OS version.
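For reference, the repo file from that page looks roughly like this once $releasever is replaced by hand (illustrative sketch):
# /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/rhel/6/$basearch/
gpgcheck=0
enabled=1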
But I don't have either version 5 or 6; I have RedHat 4.4.4-13. So what should I do in that case to make it work and get nginx installed on my EC2 instance?
If I don't change the baseurl and try to install nginx, I get this error:
http://nginx.org/packages/rhel/latest/x86_64/repodata/repomd.xml:
[Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404"
Trying other mirror. Error: Cannot retrieve repository metadata
(repomd.xml) for repository: ngnix. Please verify its path and try
again
Please note: I want to stay within the AWS Free Usage Tier and I don't want to get charged.
I hope someone will help me :(
So I solved my own problem and am writing an answer to my own question. There is no nginx package available for RHEL 4.4. Either build from source specifically for RHEL 4.4, or just migrate to an updated AMI on Amazon. I moved to Ubuntu 11.10, which is up to date and currently supported by the Ubuntu community.
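For anyone who does need to stay on the old image, a rough sketch of a source build (the version and prefix are illustrative, and the pcre/zlib/openssl development packages must already be installed):
# download, build and install nginx from source
wget http://nginx.org/download/nginx-1.2.9.tar.gz
tar xzf nginx-1.2.9.tar.gz
cd nginx-1.2.9
./configure --prefix=/usr/local/nginx
make
sudo make install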