AWS EC2 - FTP returns error 451 when trying to upload a new file

I am running two t2.medium EC2 servers on AWS. They were both launched from the same AMI with similar settings, FTP configuration (except passwords, of course) and locations. The only difference between the two servers is the content of the /var/www/html folder.
So far they have been working as expected, but yesterday something weird started happening. Whenever I try to upload a new version of a (PHP) file to one of the servers, it fails and returns the error "server did not report OK, got 451". I've tried different FTP users, different IDEs and rebooting my EC2 server, without any luck. This only happens on one of the servers, and it started happening out of the blue.
Any suggestions on how to fix this, or at least on which direction I should take my debugging in?

The comment by @korgen led me to the server's error log. When I ran sudo less /var/log/secure I quickly saw the error message:
Failed to write to /var/log/btmp: No space left on device
I checked the storage volume by running the command df -h and saw that 20.0 of 20.0 GB was in use. I increased the volume size in AWS, and after a quick reboot it all works again.
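For reference, a minimal sketch of the checks and the resize step. The device and partition names (/dev/xvda, /dev/xvda1) and the ext4 filesystem are assumptions that are typical for these AMIs; verify them with lsblk first:
df -h                                  # confirm the root volume is full
sudo du -sh /var/log/* | sort -h       # see what is taking up the space
# after increasing the EBS volume size in the AWS console:
lsblk                                  # check the actual device and partition names
sudo growpart /dev/xvda 1              # grow the partition into the new space
sudo resize2fs /dev/xvda1              # grow an ext4 filesystem (use xfs_growfs / for XFS)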
I hope this helps a future lost soul :-)

Related

Jupyter internal API is not active - Vertex AI JupyterLab error 524

I cannot access JupyterLab through the web interface (error 524). It still works over SSH. I've followed the support documentation, but nothing works.
My best guess is that the main issue is with the ports opened by Docker.
The key problem is probably the following:
curl http://127.0.0.1:8080/api/kernelspecs
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
And the following command simply restarts the service without error (but it is still inaccessible through the web interface):
sudo service jupyter restart
Thanks!
EDIT: to clarify, none of the help from this article, which is specifically supposed to fix error 524, works at all.
The diagnostic tool gives this result, and the --repair option doesn't work:
And "Verify that the Jupyter internal API is active" is completely useless, as it doesn't explain how to fix the error!
So I know there is a problem with the Jupyter internal API, but I have no idea how to fix it.
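A few checks that may help narrow this down. The jupyter service name comes from the restart command above; the remaining commands are generic checks and may not all apply to this image:
sudo ss -tlnp | grep 8080            # is anything actually listening on the port the proxy expects?
sudo systemctl status jupyter        # current state of the Jupyter service
sudo journalctl -u jupyter -n 100    # recent service logs, often show why it dies
sudo docker ps                       # if Jupyter runs in a container on this image, check it is up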
EDIT 2:
On the web console, here is a screenshot:
I have run into the same error. After upgrading the VM the problem was solved and all the Jupyter APIs are healthy, so try upgrading the VM. Before that, take a snapshot of the disk (upgrading might erase your VM).
How to upgrade the VM
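A rough sketch of what that upgrade can look like with the gcloud CLI, assuming this is a user-managed notebook backed by a regular Compute Engine VM; the instance, disk, zone and machine-type names below are placeholders, and the console works just as well:
# snapshot the boot disk first
gcloud compute disks snapshot my-notebook-disk --zone=us-central1-a --snapshot-names=my-notebook-backup
# stop the VM, change its machine type, then start it again
gcloud compute instances stop my-notebook-vm --zone=us-central1-a
gcloud compute instances set-machine-type my-notebook-vm --zone=us-central1-a --machine-type=n1-standard-4
gcloud compute instances start my-notebook-vm --zone=us-central1-a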
As I mentioned in the comment, a workaround to resolve the issue is to create a new instance while keeping the old data. For this you can follow the steps below:
Step 1: Create a new storage bucket and a new notebook.
Step 2: Copy the data to the newly created bucket by running the following command in the old notebook's terminal.
gsutil cp -R /home/jupyter/* gs://NEW_STORAGE_BUCKET_PATH
Step 3: From the new managed notebook's terminal, run the command below to copy the data into the new notebook.
gsutil cp -R gs://NEW_STORAGE_BUCKET_PATH/* /home/jupyter/

aws-shell not working in Ubuntu 20, on AWS Lightsail

I've created an AWS Lightsail instance with Ubuntu 20.04 and installed python3 and pip3.
I installed the AWS Shell tool using the pip3 install aws-shell command.
However, when I try to run it, it hangs and outputs Killed after several minutes.
This is what it looks like:
root@ip-...:/home/ubuntu# aws-shell
First run, creating autocomplete index...
Killed
root@ip-...:/home/ubuntu# aws-shell
First run, creating autocomplete index...
Killed
On the Metrics page of AWS Lightsail it shows a CPU utilization spike in the Burstable zone.
So I'm quite sad that this just wastes the CPU quota by loading the CPU for several minutes and then doesn't work.
I've done the same steps on Ubuntu 16.04 in a virtual machine and it worked fine there. So I'm completely lost here and don't know how I can fix it. I tried to google this problem and didn't find anything related.
UPD: I've also just tried using Python 2.7 to install aws-shell, and it still doesn't work. So it fails on both Python 3.8.5 and 2.7.18.
The aws-shell tool should be used on a local machine, instead of on an AWS Lightsail instance.
I wish it had a warning or info message about this; at least I know now that it was an incorrect endeavor.
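If you do need to run it on the instance anyway, the Killed output on a small Lightsail plan most likely means the kernel's OOM killer stopped the index build for lack of memory. A quick check, plus a possible workaround with a swap file (the 1 GB size is just an example):
sudo dmesg | grep -iE 'out of memory|oom'   # confirm the OOM killer terminated aws-shell
sudo fallocate -l 1G /swapfile              # optional workaround: add a swap file
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile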

NameNode keeps going down

I am having a problem with the NameNode status that Ambari shows. The following is happening:
- The NameNode keeps going down a few seconds after I start it through Ambari (it looks like it never really goes up, but the start process runs successfully);
- Despite being DOWN according to Ambari, if I run jps on the server where the NameNode is hosted it shows that the service is running:
[hdfs@NNVM ~]$ jps
39395 NameNode
4463 Jps
and I can access the NameNode UI properly;
- I have already restarted both the NameNode and the ambari-agent manually, but the behavior stays the same;
- This problem started after some heavy HBase/Phoenix queries that caused the NameNode to go down (not sure if this is actually related, but the exact same configuration was working well before this episode);
- I've been digging for some hours and I have not been able to find error details in the NameNode logs nor in the ambari-agent logs that would let me understand the problem.
I am using HDP 2.4.0, Ambari 2.2.1.1 and no HA options.
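One thing worth ruling out is a stale pid file: Ambari's status check reads the NameNode pid file rather than the process list, so if that file points at a dead PID the service is reported as down even though jps shows a live NameNode. A minimal check, assuming the default HDP pid and log locations (adjust the paths if your layout differs):
cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid          # the PID Ambari thinks the NameNode has
ps -p $(cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid) # does that PID match the process jps shows?
sudo tail -n 50 /var/log/ambari-agent/ambari-agent.log     # what the agent reports when the check runs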
Can someone help with this?
Thanks in advance.
Edited to add the Ambari version.

Upgrade PHP PDO_PGSQL version from 8.4 to 9.4 on an EC2 instance

I have an EC2 Amazon Linux AMI instance running PHP 5.6.22, with PostgreSQL 9.4.6 also installed.
Doing an echo phpinfo(); gives the following value for the PDO_PGSQL library:
PostgreSQL(libpq) Version 8.4.20
This is causing the app server to throw an error while trying to connect to the RDS instance running Postgres 9.5, due to the mismatched versions.
I have been trying to get that version up to 9.4 or 9.5. So far I have done several reinstalls and tried dealing with repositories, but without results.
EDIT:
The reported version for the psql command is: psql (PostgreSQL) 9.4.6
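For anyone debugging the same mismatch, a small sketch that may help pin down where the old libpq is coming from; the module path below is an assumption for the Amazon Linux AMI and may differ on your box:
php -i | grep -i libpq                                 # which libpq version PHP reports
ldd /usr/lib64/php/modules/pdo_pgsql.so | grep -i pq   # which shared library the extension actually loads
rpm -qa | grep -iE 'postgres|libpq'                    # which PostgreSQL client packages are installed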
It's very old, but I wanted to write this anyway.
I had the same issue and, exactly like you, thought that the error was caused by PHP's libpq.
I tried a million times to reinstall everything and even changed the OS, but with no result. Then I found another setting that can cause the error: "Security Groups". You can find it under the EC2 page menu. Switch the inbound source to "Anywhere" for the "rds-launch-wizard" group, then try to connect again.
It should work, if every other option is OK.
Good luck.

OpenStack dashboard gives error "Error: Unable to retrieve usage information"

I installed OpenStack on an EC2 instance running Ubuntu 14.04 LTS via devstack. When I log into the dashboard I get the error "Error: Unable to retrieve usage information".
When I installed it and logged in for the first time, everything was working fine. But since I stopped and restarted my EC2 instance, I have been facing this problem.
What might be causing this error?
I used the stable Juno version of devstack.
And the AMI for my EC2 instance is Ubuntu Server 14.04 LTS (HVM), SSD Volume Type.
Might restarting the instance have caused some problem?
cd to the devstack directory and execute ./rejoin-stack
That solved it. I was trying to reboot nova and the other services individually.
But since the installation was done using devstack, you need to run the ./rejoin-stack script.
In addition to akshay1188's answer, you can re-stack your system. Sometimes rejoin-stack does not work as expected. In that case, you can unstack it (unstack.sh) and stack it again (stack.sh). Note that this may take quite some time.
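A minimal sketch of that sequence, assuming devstack was cloned into the home directory:
cd ~/devstack
./unstack.sh    # stop and clean up all devstack services
./stack.sh      # rebuild and restart the whole stack (this is the slow part)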
Another observation of mine is that this can be an issue with the IP address of the system. Try to keep the IP address the same after a reboot.