I have a t2.nano instance that often reboots several times a day, as shown in the last reboot log:
reboot system boot 3.13.0-74-generi Tue Sep 12 17:26 - 19:15 (01:49)
reboot system boot 3.13.0-74-generi Tue Sep 12 13:58 - 19:15 (05:17)
reboot system boot 3.13.0-74-generi Tue Sep 12 11:13 - 19:15 (08:02)
reboot system boot 3.13.0-74-generi Tue Sep 12 00:48 - 19:15 (18:27)
reboot system boot 3.13.0-74-generi Fri Sep 1 23:48 - 19:15 (10+19:27)
As you can see, the server was up and running for 10 days until it randomly rebooted, and it then rebooted a total of 4 times over the next few hours.
There is nothing in /var/log/syslog at the time of the reboots. The instance was initially running a web server, but the web server is not configured to start back up automatically after a reboot, so after the first reboot nothing was running on the server; yet the instance still rebooted several more times.
What's going on here? Is it likely that I'm being hacked, or that there's a problem with Amazon's servers?
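For reference, a rough sketch of checks that can surface clues around a reboot (log paths are Ubuntu defaults; adjust as needed):
last -x reboot shutdown                                                   # distinguishes clean shutdowns from abrupt reboots
grep -iE 'shutdown|reboot|panic|oom' /var/log/syslog /var/log/kern.log    # any kernel-level clues around the reboot times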
Reboots look to be taking place at 19:15
Do you have any scripts or cronjobs running that could be playing a part in it?
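For example, a quick sketch for listing everything scheduled on the box (run as root):
crontab -l                                                                       # root's crontab
for u in $(cut -d: -f1 /etc/passwd); do crontab -l -u "$u" 2>/dev/null; done     # every user's crontab
cat /etc/crontab; ls /etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.weekly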
Try this: https://status.aws.amazon.com/
Reboots should be expected, but no more frequently than you'd expect with commodity hardware.
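You can also ask AWS directly whether it has flagged the underlying host or scheduled any maintenance events; a sketch using the AWS CLI (assuming it is installed and configured, and with a placeholder instance ID):
aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0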
Related
So this is the problem: I have installed OpenJDK 8 for Jenkins. Jenkins is installed and running, as shown by:
● jenkins.service - LSB: Start Jenkins at boot time
Loaded: loaded (/etc/init.d/jenkins; generated)
Active: active (exited) since Thu 2021-10-21 19:22:55 UTC; 20min ago
Docs: man:systemd-sysv-generator(8)
Process: 437 ExecStart=/etc/init.d/jenkins start (code=exited, status=0/SUCCESS)
Oct 21 19:22:52 ip-172-31-30-187 systemd[1]: Starting LSB: Start Jenkins at boot time...
Oct 21 19:22:53 ip-172-31-30-187 jenkins[437]: Correct java version found
Oct 21 19:22:53 ip-172-31-30-187 jenkins[437]: * Starting Jenkins Automation Server jenkins
Oct 21 19:22:54 ip-172-31-30-187 su[619]: (to jenkins) root on none
Oct 21 19:22:54 ip-172-31-30-187 su[619]: pam_unix(su-l:session): session opened for user jenkins by (u>
Oct 21 19:22:54 ip-172-31-30-187 su[619]: pam_unix(su-l:session): session closed for user jenkins
Oct 21 19:22:55 ip-172-31-30-187 jenkins[437]: ...done.
Oct 21 19:22:55 ip-172-31-30-187 systemd[1]: Started LSB: Start Jenkins at boot time.
However, browsing to serverip:8080 brings up nothing.
I used this tutorial: https://www.youtube.com/watch?v=B6K1IF-489M&t=36s
Port 8080 has also been added to the security group.
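A few checks that might narrow this down (a sketch; the log path assumes the default Ubuntu/Debian Jenkins package):
sudo ss -tlnp | grep 8080                      # is anything actually listening on 8080?
curl -I http://localhost:8080                  # does Jenkins answer locally, bypassing the security group?
sudo tail -n 50 /var/log/jenkins/jenkins.log   # any startup errors?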
This problem was not solved directly, but creating a fresh EC2 instance and installing Jenkins by following that tutorial did the trick.
I have a Hyperledger Fabric network running on a single AWS instance using the default byfn script.
ERROR: Orderer, cli, CA docker containers show "Up" status. Peers show "Exited" status.
The error occurs when:
The byfn network is running and the machine is rebooted (not under my control, but for some external reason).
The network is left running overnight without shutting the machine down; it shows the same status the next morning.
Error shown:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b0523a7b1730 hyperledger/fabric-tools:latest "/bin/bash" 23 seconds ago Up 21 seconds cli
bfab227eb4df hyperledger/fabric-peer:latest "peer node start" 28 seconds ago Exited (2) 23 seconds ago peer1.org1.example.com
6fd7e818fab3 hyperledger/fabric-peer:latest "peer node start" 28 seconds ago Exited (2) 19 seconds ago peer1.org2.example.com
1287b6d93a23 hyperledger/fabric-peer:latest "peer node start" 28 seconds ago Exited (2) 22 seconds ago peer0.org2.example.com
2684fc905258 hyperledger/fabric-orderer:latest "orderer" 28 seconds ago Up 26 seconds 0.0.0.0:7050->7050/tcp orderer.example.com
93d33b51d352 hyperledger/fabric-peer:latest "peer node start" 28 seconds ago Exited (2) 25 seconds ago peer0.org1.example.com
Attaching docker log: https://hastebin.com/ahuyihubup.cs
Only the peers fail to start up.
Steps I have tried to solve the issue:
Ran docker start $(docker ps -aq), and also tried manually starting individual peers.
Ran byfn down, generate and then up again; this shows the same result as above.
Rolled back to previous versions of the Fabric binaries. Same result on 1.1, 1.2 and 1.4. With the older binaries, the error does not recur if the network is left running overnight, but it does when the machine is restarted.
Used older docker images such as 1.1 and 1.2.
Tried starting up only one peer, orderer and cli.
Changed network name and domain name.
Uninstalled docker, docker-compose and reinstalled.
Changed port numbers of all nodes.
Tried restarting without mounting any volumes.
The only thing that works is reformatting the AWS instance and reinstalling everything from scratch. Also, I am NOT using the AWS blockchain template.
Any help would be appreciated. I have been stuck on this issue for a month now.
The error was resolved by adding the following to peer-base.yaml (GODEBUG=netdns=go as an entry under the peer's environment section, and dns_search: . as a service-level key):
GODEBUG=netdns=go
dns_search: .
Thanks to @gari-singh for the answer:
https://stackoverflow.com/a/49649678/5248781
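After editing peer-base.yaml, recreating the network should pick up the change; a sketch following the same byfn steps as in the question (assuming the script is byfn.sh as in fabric-samples), plus a quick check that the variable made it into a peer container:
./byfn.sh down
./byfn.sh generate
./byfn.sh up
docker exec peer0.org1.example.com env | grep GODEBUG   # should print GODEBUG=netdns=go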
I'm a student from Korea; first, I'm sorry about my low-level English :)
I'm making a web service using AWS + nginx + Django.
I connect to the AWS instance (Ubuntu) using SSH:
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-74-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Sat Apr 30 07:03:51 UTC 2016
System load: 0.0 Processes: 105
Usage of /: 23.8% of 7.74GB Users logged in: 0
Memory usage: 14% IP address for eth0: 172.31.17.137
Swap usage: 0%
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
21 packages can be updated.
17 updates are security updates.
Last login: Sat Apr 30 07:03:52 2016 from 210.103.124.253
pyenv-virtualenv: no virtualenv has been activated.
and then I run:
manage.py runserver --settings=abc.settings.production
So everyone can access my web service!
But after about 30 minutes, the SSH connection breaks by itself and prints this message:
packet_write_wait: Connection to 52.69.xxx.xxx: Broken pipe
After that, nobody can access my web service anymore.
So my web site cannot be reached when my computer is powered off or the SSH connection is closed.
I want everyone to be able to access my web service 24/7.
Please give me a method. Thank you :)
When you want to run a command that keeps running after your current shell terminates, you should use the nohup command to launch it.
nohup makes the process immune to the hangup signal (SIGHUP) sent when the parent shell exits, so it is not killed when you disconnect.
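For example, a minimal sketch based on the command from the question (the python prefix, bind address/port and log file name are assumptions; adjust to your setup):
nohup python manage.py runserver 0.0.0.0:8000 --settings=abc.settings.production > runserver.log 2>&1 &
You can then close the SSH session; ps aux | grep runserver should still show the process afterwards.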
I am using the Django (Python) framework deployed on an AWS EC2 Ubuntu instance and sending email using boto and the AWS SES service.
My script used to work, but for the past few days I have been encountering this error:
BotoServerError at /contact_us/
BotoServerError: 400 Bad Request
<ErrorResponse xmlns="http://ses.amazonaws.com/doc/2010-12-01/">
<Error>
<Type>Sender</Type>
<Code>RequestExpired</Code>
<Message>Request timestamp: Wed, 16 Mar 2016 16:57:21 GMT expired. It must be within 300 secs/ of server time.</Message>
</Error>
<RequestId>368a4b97-eb97-11e5-bf2d-8ff0675b134d</RequestId>
</ErrorResponse>
Exception Location: /usr/local/lib/python2.7/dist-packages/boto/ses/connection.py
in _handle_error, line 177
Server time: Wed, 16 Mar 2016 16:57:21 +0000
SES works on UTC and I have changed the EC2 instance's time to UTC as well.
Please help me figure out how to solve this issue.
Request timestamp: Wed, 16 Mar 2016 16:57:21 GMT expired. It
must be within 300 secs/ of server time.
Since you said it has only stopped working in the last few days, it is most likely due to the recent daylight saving time change, and it is likely you are not running NTP to sync your clock.
Try this: sudo ntpdate pool.ntp.org
which will sync your system clock. If you want to make sure the time sync happens periodically, then start the NTP daemon:
sudo service ntp stop
sudo ntpdate -s pool.ntp.org
sudo service ntp start
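Afterwards you can sanity-check the result (a sketch, assuming the ntp package used by the commands above):
date -u      # should now match real UTC time to within a few seconds
ntpq -p      # lists the peers the NTP daemon is syncing against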
Had a weird error trying to start cntlmd on Centos 7.1.
systemctl start cntlmd results in the following in the logs (and yes, "becomming" is exactly how it's spelt in the logs :)):
systemd: Started SYSV: Cntlm is meant to be given your proxy address and becomming
The weird things are:
It did run initially after installation.
The exact same config works perfectly on another machine (provisioned with Chef, so 100% the same config).
If I run it in the foreground it works, but through systemd it does not.
To "fix" it, I had to manually remove and reinstall, whereupon it worked again.
Anybody seen this error (Google reveals nothing) and know what's going on?
I realised that the /var/run/cntlm directory seemed to be "removed" after every boot. It turns out it is never created by systemd-tmpfiles on boot (thanks to this SO answer), which then resulted in:
Feb 29 06:13:04 node01 cntlm: Using following NTLM hashes: NTLMv2(1) NT(0) LM(0)
Feb 29 06:13:04 node01 cntlm[10540]: Daemon ready
Feb 29 06:13:04 node01 cntlm[10540]: Changing uid:gid to 996:995 - Success
Feb 29 06:13:04 node01 cntlm[10540]: Error creating a new PID file
because cntlm couldn't write its PID file, since /var/run/cntlm didn't exist.
So, to get systemd-tmpfiles to create the /var/run/cntlm directory on boot, you need to add the following file at /usr/lib/tmpfiles.d/cntlm.conf:
d /run/cntlm 700 cntlm cntlm
Reboot and Bob's your uncle.
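If you'd rather not wait for a reboot, the same entry can also be applied immediately (a sketch):
sudo systemd-tmpfiles --create /usr/lib/tmpfiles.d/cntlm.conf
sudo systemctl start cntlmd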