Issue: Unable to access the Jenkins portal after an EC2 instance restart.
Debug steps taken: restarted the Jenkins service and added the JDK path in Jenkins.
The browser shows "This site can't be reached", and systemctl status jenkins reports:
● jenkins.service - Jenkins Continuous Integration Server
   Loaded: loaded (/usr/lib/systemd/system/jenkins.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-05-11 05:15:30 UTC; 28min ago
 Main PID: 2073 (java)
    Tasks: 33 (limit: 4690)
   Memory: 311.1M
   CGroup: /system.slice/jenkins.service
           └─2073 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8080
May 11 05:15:30 ip jenkins[2073]: 2022-05-11 05:15:30.671+0000 [id=41] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished Download >
May 11 05:15:30 ip jenkins[2073]: 2022-05-11 05:15:30.857+0000 [id=26] INFO jenkins.InitReactorRunner$1#onAttained: Completed initialization
May 11 05:15:30 ip jenkins[2073]: 2022-05-11 05:15:30.913+0000 [id=20] INFO hudson.lifecycle.Lifecycle#onReady: Jenkins is fully up and runni>
May 11 05:15:30 ip systemd[1]: Started Jenkins Continuous Integration Server.
May 11 05:16:02 ip jenkins[2073]: 2022-05-11 05:16:02.724+0000 [id=56] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$1: Started EC2 alive >
May 11 05:16:02 ip jenkins[2073]: 2022-05-11 05:16:02.725+0000 [id=56] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished EC2 alive>
May 11 05:26:02 ip jenkins[2073]: 2022-05-11 05:26:02.724+0000 [id=57] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$1: Started EC2 alive >
May 11 05:26:02 ip jenkins[2073]: 2022-05-11 05:26:02.725+0000 [id=57] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished EC2 alive>
May 11 05:36:02 ip jenkins[2073]: 2022-05-11 05:36:02.724+0000 [id=58] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$1: Started EC2 alive >
May 11 05:36:02 ip jenkins[2073]: 2022-05-11 05:36:02.725+0000 [id=58] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished EC2 alive>
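Since the status output shows Jenkins active (running) and started with --httpPort=8080, a useful next check (a sketch, run on the instance itself; assumes curl and ss are available) is whether the port answers locally, which separates a Jenkins problem from a network problem:

# check the listener from inside the instance
curl -I http://localhost:8080
ss -tlnp | grep 8080

If both succeed, the usual suspects after an EC2 stop/start are the Security Group rules and the instance's public IP or DNS name, which change on restart unless an Elastic IP is attached.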
I'm having some trouble trying to change my Jenkins port, as I was hoping to use port 8080 for a different service. I've tried this so far:
Currently running on Amazon Linux:
Jenkins version: Jenkins 2.332.1
I've tried editing the config file /etc/sysconfig/jenkins to:
JENKINS_PORT="7777"
After I restart Jenkins, however, the port does not change:
● jenkins.service - Jenkins Continuous Integration Server
Loaded: loaded (/usr/lib/systemd/system/jenkins.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/jenkins.service.d
└─override.conf
Active: active (running) since Tue 2022-04-05 15:52:24 UTC; 1min 33s ago
Main PID: 1017 (java)
Tasks: 36
Memory: 500.6M
CGroup: /system.slice/jenkins.service
└─1017 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=%C/jenkins/war --httpPort=8080
Apr 05 15:53:38 ip-172-0-2-240.eu-west-1.compute.internal jenkins[1017]: at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
Apr 05 15:53:38 ip-172-0-2-240.eu-west-1.compute.internal jenkins[1017]: at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
What am I missing here?
Check the service start command:
/usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=%C/jenkins/war --httpPort=8080
Edit the service by changing --httpPort=8080 to the desired port, then run daemon-reload and restart the service.
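For example, a sketch of that edit using a systemd drop-in (this matches the override.conf drop-in shown in the status output; note that ExecStart must be cleared before it is redefined):

sudo systemctl edit jenkins
# in the editor, add:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=%C/jenkins/war --httpPort=7777
sudo systemctl daemon-reload
sudo systemctl restart jenkins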
Also, ensure the Security Group is configured to allow that port.
There is a different fix at https://cdmana.com/2022/03/202203242138366513.html, which suggests editing JENKINS_PORT in /usr/lib/systemd/system/jenkins.service and then calling service jenkins start.
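A sketch of that variant, assuming the packaged unit reads JENKINS_PORT (a drop-in created via systemctl edit jenkins is safer than editing the packaged unit under /usr/lib directly):

# contents of the drop-in:
#   [Service]
#   Environment="JENKINS_PORT=7777"
sudo systemctl daemon-reload
sudo service jenkins restart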
I have a Google Cloud VM instance with Debian OS, on which I host WordPress sites. After upgrading the OS version everything was working fine, and I was able to connect via SSH using the 'Open SSH in browser' option.
Now, when I try to connect to my VM instance using 'Open SSH in browser', it just keeps retrying. I checked the serial console output, but there is no error message; please refer below.
However, I am able to connect via FTP using the same key; it is only SSH that fails. I checked port 22 for that instance and the project, and it's open.
Below are the last few lines of the serial console log after I restarted the VM:
Dec 6 09:01:20 localhost sendmail[383]: Starting Mail Transport Agent (MTA): sendmail.
Dec 6 09:01:20 localhost systemd[1]: Started LSB: powerful, efficient, and scalable Mail Transport Agent.
Dec 6 09:01:21 localhost systemd[1]: Started MariaDB 10.3.31 database server.
Dec 6 09:01:21 localhost systemd[1]: Reached target Multi-User System.
Dec 6 09:01:21 localhost systemd[1]: Reached target Graphical Interface.
Dec 6 09:01:21 localhost systemd[1]: Startup finished in 4.063s (kernel) + 9.852s (userspace) = 13.915s.
Dec 6 09:01:21 localhost /etc/mysql/debian-start[567]: Upgrading MySQL tables if necessary.
Dec 6 09:01:21 localhost /etc/mysql/debian-start[570]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
Dec 6 09:01:21 localhost /etc/mysql/debian-start[570]: Looking for 'mysql' as: /usr/bin/mysql
Dec 6 09:01:21 localhost /etc/mysql/debian-start[570]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
Dec 6 09:01:21 localhost /etc/mysql/debian-start[570]: Version check failed. Got the following error when calling the 'mysql' command line client
Dec 6 09:01:21 localhost /etc/mysql/debian-start[570]: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
Dec 6 09:01:21 localhost /etc/mysql/debian-start[570]: FATAL ERROR: Upgrade failed
Dec 6 09:01:21 localhost /etc/mysql/debian-start[580]: Checking for insecure root accounts.
Dec 6 09:01:21 localhost debian-start[564]: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
Debian GNU/Linux 10 localhost ttyS0
localhost login:
Dec 6 09:01:28 localhost systemd[1]: Stopping User Manager for UID 110...
Dec 6 09:01:28 localhost systemd[497]: Stopped target Default.
Dec 6 09:01:28 localhost systemd[497]: Stopped target Basic System.
Dec 6 09:01:28 localhost systemd[497]: Stopped target Timers.
Dec 6 09:01:28 localhost systemd[497]: Stopped target Paths.
Dec 6 09:01:28 localhost systemd[497]: Stopped target Sockets.
Dec 6 09:01:28 localhost systemd[497]: gpg-agent-browser.socket: Succeeded.
Dec 6 09:01:28 localhost systemd[497]: Closed GnuPG cryptographic agent and passphrase cache (access for web browsers).
Dec 6 09:01:28 localhost systemd[497]: dirmngr.socket: Succeeded.
Dec 6 09:01:28 localhost systemd[497]: Closed GnuPG network certificate management daemon.
Dec 6 09:01:28 localhost systemd[497]: gpg-agent-ssh.socket: Succeeded.
Dec 6 09:01:28 localhost systemd[497]: Closed GnuPG cryptographic agent (ssh-agent emulation).
Dec 6 09:01:28 localhost systemd[497]: gpg-agent.socket: Succeeded.
Dec 6 09:01:28 localhost systemd[497]: Closed GnuPG cryptographic agent and passphrase cache.
Dec 6 09:01:28 localhost systemd[497]: gpg-agent-extra.socket: Succeeded.
Dec 6 09:01:28 localhost systemd[497]: Closed GnuPG cryptographic agent and passphrase cache (restricted).
Dec 6 09:01:28 localhost systemd[497]: Reached target Shutdown.
Dec 6 09:01:28 localhost systemd[497]: systemd-exit.service: Succeeded.
Dec 6 09:01:28 localhost systemd[497]: Started Exit the Session.
Dec 6 09:01:28 localhost systemd[497]: Reached target Exit the Session.
Dec 6 09:01:28 localhost systemd[1]: user@110.service: Succeeded.
Dec 6 09:01:28 localhost systemd[1]: Stopped User Manager for UID 110.
Dec 6 09:01:28 localhost systemd[1]: Stopping User Runtime Directory /run/user/110...
Dec 6 09:01:28 localhost systemd[1]: run-user-110.mount: Succeeded.
Dec 6 09:01:28 localhost systemd[1]: user-runtime-dir@110.service: Succeeded.
Dec 6 09:01:28 localhost systemd[1]: Stopped User Runtime Directory /run/user/110.
Dec 6 09:01:28 localhost systemd[1]: Removed slice User Slice of UID 110.
I tried the following solutions, which I found via Google search.
Solution 1: Using PuTTYgen & PuTTY
Generated a key pair using PuTTYgen and put the public key under the project metadata, and also tried adding it under the instance metadata. I have set enable-oslogin to FALSE.
But I got the following error message.
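For reference, the metadata route can also be driven from gcloud (a sketch; INSTANCE_NAME is a placeholder, and ssh_keys.txt must contain lines of the form USERNAME:ssh-rsa AAAA... USERNAME):

gcloud compute instances add-metadata INSTANCE_NAME --metadata-from-file ssh-keys=ssh_keys.txt
gcloud compute instances add-metadata INSTANCE_NAME --metadata enable-oslogin=FALSE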
Solution 2: Using serial ports
When I tried to connect using different serial ports, it just got stuck at the connection screen, and when I checked the console log for that serial port it was blank.
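This is roughly how the serial-port route goes with gcloud (a sketch; the instance name is a placeholder, and serial console access must be enabled first):

gcloud compute instances add-metadata INSTANCE_NAME --metadata serial-port-enable=TRUE
gcloud compute connect-to-serial-port INSTANCE_NAME --port 2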
Solution 3: New instance from a disk image
Created an image of the current disk and then created a new instance from that image. When I try to connect to that new instance, I face the same issue.
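A sketch of that image-and-recreate route with gcloud (disk name, image name, and zone are placeholders):

gcloud compute images create recovery-image --source-disk=DISK_NAME --source-disk-zone=ZONE
gcloud compute instances create new-vm --image=recovery-image --zone=ZONE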
Solution 4: Using a different machine to set up the CLI
I set up a fresh Google Cloud CLI on a new machine and tried to connect, but with no success; I faced the same error.
Solution 5: Increasing disk space
Increased the disk space from 20 GB to 35 GB, but that didn't work. Usually a disk-space error shows up in the serial console log, but in my case there is no error message there.
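For reference, the resize step itself can be driven from gcloud while the VM is unreachable (a sketch; disk name and zone are placeholders):

gcloud compute disks resize DISK_NAME --size=35GB --zone=ZONE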
Please help and let me know if any additional information is required.
Thanks
I have a VPS server on Amazon's AWS Lightsail service.
I've been testing kdump using the following two commands (to trigger a kernel crash):
# echo 1 > /proc/sys/kernel/sysrq
# echo c > /proc/sysrq-trigger
The problem is that the system crashed and rebooted, but there's no dump saved.
Here is a list of the checks I've done:
[centos@server crash]$ systemctl status kdump
● kdump.service - Crash recovery kernel arming
Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: enabled)
Active: active (exited) since Mon 2019-03-18 07:43:34 UTC; 5 days ago
Process: 4119 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS)
Main PID: 4119 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/kdump.service
Mar 18 07:43:32 ip-.ap-northeast-1.compute.internal systemd[1]: Starting Crash recovery kernel arming...
Mar 18 07:43:34 ip-.ap-northeast-1.compute.internal kdumpctl[4119]: kexec: loaded kdump kernel
Mar 18 07:43:34 ip-.ap-northeast-1.compute.internal kdumpctl[4119]: Starting kdump: [OK]
Mar 18 07:43:34 ip-.ap-northeast-1.compute.internal systemd[1]: Started Crash recovery kernel arming.
[centos@server crash]$ dmesg | grep Reserving
[ 0.000000] Reserving 256MB of memory at 368MB for crashkernel (System RAM: 2047MB)
[centos@server crash]$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.10.0-957.1.3.el7.x86_64 root=UUID=f41e390f-835b-4223-a9bb-9b45984ddf8d ro console=tty0 crashkernel=256M console=ttyS0,115200
[centos@server crash]$ grep -v ^# /etc/kdump.conf
path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31
default reboot
There's no log of the crash in /var/log/messages indicating what the error might be, so I wonder what I might have missed. Or is an AWS Lightsail VPS simply not capable of saving a kdump at all?
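One more check that may be worth adding to that list (a sketch; these are the stock CentOS 7 interfaces): confirm that a crash kernel is actually loaded and armed before triggering the crash.

cat /sys/kernel/kexec_crash_loaded    # should print 1
sudo kdumpctl status                  # should report that kdump is operational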
On one of my AWS EC2 instances running Ubuntu 16.04, I'm getting the following errors filling up my /var/log/syslog.
Jul 17 18:11:21 Mysql-Slave systemd[1]: Stopped The CloudWatch Logs agent.
Jul 17 18:11:21 Mysql-Slave systemd[1]: Started The CloudWatch Logs agent.
Jul 17 18:11:26 Mysql-Slave systemd[1]: awslogs.service: Main process exited, code=exited, status=255/n/a
Jul 17 18:11:26 Mysql-Slave systemd[1]: awslogs.service: Unit entered failed state.
Jul 17 18:11:26 Mysql-Slave systemd[1]: awslogs.service: Failed with result 'exit-code'.
Jul 17 18:11:26 Mysql-Slave systemd[1]: awslogs.service: Service hold-off time over, scheduling restart.
Jul 17 18:11:26 Mysql-Slave systemd[1]: Stopped The CloudWatch Logs agent.
Jul 17 18:11:26 Mysql-Slave systemd[1]: Started The CloudWatch Logs agent.
Jul 17 18:11:32 Mysql-Slave systemd[1]: awslogs.service: Main process exited, code=exited, status=255/n/a
Jul 17 18:11:32 Mysql-Slave systemd[1]: awslogs.service: Unit entered failed state.
Jul 17 18:11:32 Mysql-Slave systemd[1]: awslogs.service: Failed with result 'exit-code'.
Jul 17 18:11:32 Mysql-Slave systemd[1]: awslogs.service: Service hold-off time over, scheduling restart.
Jul 17 18:11:32 Mysql-Slave systemd[1]: Stopped The CloudWatch Logs agent.
Jul 17 18:11:32 Mysql-Slave systemd[1]: Started The CloudWatch Logs agent.
The /var/log/awslogs.log contains these messages:
database is locked
2018-07-17 20:59:01,055 - cwlogs.push - INFO - 27074 - MainThread - Missing or invalid value for use_gzip_http_content_encoding config. Defaulting to using gzip encoding.
2018-07-17 20:59:01,055 - cwlogs.push - INFO - 27074 - MainThread - Using default logging configuration.
database is locked
2018-07-17 20:59:06,549 - cwlogs.push - INFO - 27104 - MainThread - Missing or invalid value for use_gzip_http_content_encoding config. Defaulting to using gzip encoding.
2018-07-17 20:59:06,549 - cwlogs.push - INFO - 27104 - MainThread - Using default logging configuration.
database is locked
2018-07-17 20:59:12,054 - cwlogs.push - INFO - 27110 - MainThread - Missing or invalid value for use_gzip_http_content_encoding config. Defaulting to using gzip encoding.
2018-07-17 20:59:12,054 - cwlogs.push - INFO - 27110 - MainThread - Using default logging configuration.
Any pointers on troubleshooting this would be of great help.
A similar issue was posted at the following link: https://forums.aws.amazon.com/thread.jspa?threadID=165134
I did the following:
a) Stopped the awslogs service
$ service awslogs stop ## Amazon Linux
OR
$ service awslogsd stop ## Amazon Linux 2
b) Deleted the agent-state file in /var/awslogs/state/ (I renamed it in my case)
$ mv agent-state agent-state.old ## Amazon Linux
OR
$ cd /var/lib/awslogs; mv agent-state agent-state.old ## Amazon Linux 2
c) Restarted the awslogs service
$ service awslogs start ## Amazon Linux
OR
$ sudo systemctl start awslogsd ## Amazon Linux 2
A new agent-state file was created as a result, and the errors mentioned in my post disappeared after this.
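Put together as a single sequence for Amazon Linux 2 (a sketch using the service name and state path from the steps above; substitute awslogs and /var/awslogs/state/ on Amazon Linux 1):

sudo systemctl stop awslogsd
sudo mv /var/lib/awslogs/agent-state /var/lib/awslogs/agent-state.old
sudo systemctl start awslogsd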
Please try the following commands, based on your Linux version.
sudo service awslogs start
If you are running Amazon Linux 2, try the command below:
sudo systemctl start awslogsd
It took me 2 hours to figure this out.
In my case, I found duplicate entries for some properties in the /etc/awslogs/awslogs.conf file.
(Not all were duplicates; some of the properties were commented out, and I uncommented them to set values.)
That didn't work. Then I scrolled to the bottom of the file and found the following entries. I set the values on these properties and it worked:
[/var/log/messages]
datetime_format = %b %d %H:%M:%S
file = /home/ec2-user/application.log
buffer_duration = 5000
log_stream_name = {instance_id}
initial_position = start_of_file
log_group_name = MyProject
I'm using the Docker tools on Windows.
The create command was working perfectly last week, and I managed to create a number of machines on DigitalOcean. Then I tried today with no success. I repeated the same command with different regions, and I always get the same result:
λ docker-machine create -d digitalocean --digitalocean-access-token=MYTOKEN --digitalocean-region=ams2 vmname
Running pre-create checks...
Creating machine...
(fernu) Creating SSH key...
(fernu) Creating Digital Ocean droplet...
(fernu) Waiting for IP address to be assigned to the Droplet...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Error creating machine: Error running provisioning: ssh command error:
command : sudo systemctl -f start docker
err : exit status 1
output : Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
If I execute the suggested command:
root@fernu:~# systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/docker.service.d
└─10-machine.conf
Active: inactive (dead) (Result: exit-code) since Fri 2017-06-30 20:56:13 UTC; 8min ago
Docs: https://docs.docker.com
Process: 4943 ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=digitalocean (code=exited, status=1/FAILURE)
Main PID: 4943 (code=exited, status=1/FAILURE)
Jun 30 20:56:13 fernu systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 20:56:13 fernu systemd[1]: Failed to start Docker Application Container Engine.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Unit entered failed state.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Jun 30 20:56:13 fernu systemd[1]: Stopped Docker Application Container Engine.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Start request repeated too quickly.
Jun 30 20:56:13 fernu systemd[1]: Failed to start Docker Application Container Engine.
Any help would be appreciated.
Update
It's working with Ubuntu 14.04 (--digitalocean-image=ubuntu-14-04-x64), so it seems like a problem with the default image (ubuntu-16-04-x64).
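For completeness, the original command with that workaround flag would look like this (token, region, and machine name are placeholders, as above):

docker-machine create -d digitalocean --digitalocean-access-token=MYTOKEN --digitalocean-region=ams2 --digitalocean-image=ubuntu-14-04-x64 vmname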
This seems to be hitting a lot of people. TL;DR: There is a bug in docker-machine v0.12.0 and this issue can be resolved by upgrading.
Logging in to the DigitalOcean instance and running journalctl -xe provides more information:
-- Unit docker.service has begun starting up.
Jul 07 20:03:52 docker-sandbox docker[4930]: `docker daemon` is not supported on Linux. Please run `do
Jul 07 20:03:52 docker-sandbox systemd[1]: docker.service: Main process exited, code=exited, status=1/
Jul 07 20:03:52 docker-sandbox systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
The key here is docker daemon is not supported on Linux. A bug in docker-machine's version comparison code caused an incorrect systemd unit file to be produced (located at /etc/systemd/system/docker.service.d/10-machine.conf) on certain versions of Ubuntu.
A fix has been committed and a new release (v0.12.1) was made.
You can grab the latest release at: https://github.com/docker/machine/releases/tag/v0.12.1
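A sketch of checking your version and swapping in the fixed binary (the asset naming follows the project's release page; adjust the path and platform suffix for your OS):

docker-machine --version
# replace the binary with the v0.12.1 release, e.g. on Linux/macOS:
curl -L https://github.com/docker/machine/releases/download/v0.12.1/docker-machine-`uname -s`-`uname -m` -o /usr/local/bin/docker-machine
chmod +x /usr/local/bin/docker-machine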