Jenkins AccessDeniedException upon trying to enable security on Jenkins on an EC2

Last night, while setting up Jenkins from the jenkins.war file, I tried to enable security via username/password. I clicked the "Disable read access to anonymous" checkbox, and right after doing that I got an access-denied screen, even after logging in with the new credentials I had just created. I have tried the following, all of which still resulted in that screen:
removing anything on the EC2 related to Jenkins (sudo find / -name "*jenkins*", followed by sudo rm [-rf] on anything that popped up in the results)
re-visiting the site after doing the above
re-installing the WAR file
installing Jenkins as a service
attempting login again
Is there a way out of this?

I should have checked the running processes and killed the one that was Jenkins; the process somehow outlived its JAR/WAR executable!
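For anyone else stuck on the same screen, a minimal sketch of checking for and killing a lingering Jenkins process (assuming a standard Linux box; the match is on the process command line):

ps aux | grep -i [j]enkins    # the [j] trick keeps grep from matching itself
sudo pkill -f jenkins         # kill anything whose command line mentions jenkins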

Related

Unable to connect to Google Compute Engine, getting permission denied error

I accidentally changed the permissions of the .ssh folder to 600, and now I am not able to log in to the GCP server through SSH, as it gives me a permission denied error.
**Connection Failed**
You cannot connect to the VM instance because of an unexpected error. Wait a few moments and then try again.
I tried multiple options, such as SSH troubleshooting on the instance, enabling the serial console, and SSH private-key login.
Thank you in advance.
One simple way to fix this would be to use a startup script. In this script, just execute chmod 700 /path/to/your/.ssh.
The startup scripts are executed with root privileges, so it should be able to fix your problem with .ssh folder permissions.
So, what you need to do:
Set the startup script (a sketch follows this list).
Restart the VM.
Wait a minute or two to make sure the script got executed.
Remove the startup script from the machine. (no need to restart again)
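A sketch of what this could look like from the SDK, assuming the .ssh folder lives in your user's home directory (YOUR_INSTANCE and your_user are placeholders):

gcloud compute instances add-metadata YOUR_INSTANCE \
    --metadata startup-script='#! /bin/bash
chmod 700 /home/your_user/.ssh
chmod 600 /home/your_user/.ssh/authorized_keys'

The second chmod is optional hardening; only the 700 on the folder itself is needed to undo the original mistake.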
Thank you guys for all your support; my problem got solved by following the document below:
Serial Console with local password using a startup script

Unable to sudo to Deep Learning Image

I installed the latest Google Cloud Deep Learning VM Image today. After the VM was launched, I was able to do sudo -i successfully via the browser SSH window.
Once I log in, I start my TensorFlow model training running in the background (using &). A few hours later, I'm unable to log in as root.
I get the following message:
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo] password for my_username:
I tried:
sudo -i
su sudo -i
su root
I was able to replicate the issue. Any suggestions?
This issue was caused by a problem on Google's side that removes the user from the "google-sudoers" group. For all affected instances, I suggest the following workaround until the permanent fix has been rolled out.
Use a different username:
If using the browser SSH window, click on the settings icon (top right) and click "Change Linux name" in the drop-down.
Using the SDK:
$ gcloud compute ssh newusername@instance
Enable OS Login on the instance (set "enable-oslogin=True" in the instance metadata).
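If you prefer the SDK for this step too, the metadata can be set with (instance name is a placeholder):

gcloud compute instances add-metadata YOUR_INSTANCE --metadata enable-oslogin=True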
You can track the permanent fix by following the Public Issue tracker.
The original answer:
Maybe the solution would be to add an SSH key in the Google Cloud Console and log in with another SSH client.
Additional answer:
I do not know why, but sometimes the user suddenly stops being a member of the google-sudoers group...
Then it's enough to add your user back to that group, from some other user with administrator privileges:
# usermod -aG google-sudoers your_user_name
(note the -a flag: without it, usermod replaces the user's supplementary groups instead of appending to them)
of course, if there is such a user...

Cannot chmod file on OpenShift Online v3: Operation not permitted

I am migrating a Django application from OpenShift v2 to v3 (in case you don't know, Red Hat is shutting down v2 on September 30th, see: https://blog.openshift.com/migrate-to-v3-v2-eol/).
So, I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/ . I am new to all the Docker / Kubernetes concepts the new version is built upon.
I was able to make some progress: I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its x permission. I log into the failing container with oc debug and see it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did on my local repo. Anyway, I tried to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without looking into more details: any S2I builder image will gladly use a custom-supplied run script to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it and rebuild the app in OpenShift - it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
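As a minimal sketch, assuming app.sh is the intended entrypoint, .s2i/bin/run could be as short as:

#!/bin/bash
# hand control to the application script; invoking it through bash
# sidesteps the lost x bit, and exec lets signals reach the app directly
exec /bin/bash /opt/app-root/src/app.sh

Since the shell reads the script rather than executing the file, no execute permission is needed at all.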
Regarding your immediate problem, there is a very simple reason why you can not change the permissions of the script: you were trying to modify the permissions in the deployed pod, and not the builder pod. Deployed pods run using different UIDs, usually somewhere in the range of 100000000, and definitely do not match the file ownership as generated by the build. Hence permission denied.
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you wanted to change this part of the build instead of using a custom run script, I suggest you then create .s2i/bin/assemble in your project's source code and make it look sort of like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I found a way to resolve it.
You have to make your app.sh file executable and push it to your repo as such.
If git does not track this modification (as it didn't for me), you have to use git update-index --chmod=+x app.sh for it to work.
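For reference, the full sequence (assuming app.sh sits at the repo root) would be:

chmod +x app.sh                       # fix the local file mode
git update-index --chmod=+x app.sh    # make git record the executable bit
git commit -m "Make app.sh executable"
git push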

Google cloud compute startup script ignored with no logging

I have a standard Debian 8.9 instance on google cloud compute (GCE) where my startup script is ignored.
In the custom metadata field, for startup-script, I am trying to run an Rscript (which is used for batch execution of R files), followed by a system shutdown, with the following:
#! /bin/bash
sudo /usr/bin/Rscript /home/myuser/launch_script.R
sudo shutdown -h now
Starting the instance is immediately followed by a shutdown, and the Rscript is ignored. Removing the last shutdown line causes the GCE instance to start, but the Rscript is still ignored. Running just sudo /usr/bin/Rscript /home/myuser/launch_script.R from the terminal results in the script being run. It has a chmod of 755, so I don't think this is a permissions issue.
In addition to this problem, I have read elsewhere that logging should happen in /var/log/, but there is nothing there. Instead, I have a bunch of log files (that only contain the startup script and nothing else) in the root of my instance.
I got in touch with Google cloud support, who gave the following response:
The script definition is kept under /var/run/google.startup.script.
If the script does not run initially, you can force it manually with:
$ sudo google_metadata_script_runner --script-type startup    # on Debian
$ sudo /usr/share/google/run-startup-scripts                  # on Ubuntu and older images
I'm posting this information here, because it is not in their documentation (as of August 2017). I'm not sure how helpful it is, since the google.startup.script didn't exist in my case (using the latest Debian image on GCE), but I did run the other commands.
However, I think my main issues were:
I was using autossh to connect to a remote database, and the startup-script was running before autossh. Building a 40-second delay into the script and running it as a regular user (not sudo-type root) seems to have solved this problem for now; see the sketch after this list. Autossh was being run as the main user, which I think gets loaded before lower-privilege user-defined scripts.
I was using some gcloud commands from the user account which had its own authentication issues. Running gcloud auth login as the user and ensuring correct permissions on my private key solved this.
Always remember to check the messages and syslog files in /var/log for troubleshooting. This allowed me to see the order of things being loaded at system-boot.
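For reference, a sketch of the adjusted startup script, reusing the paths from the question (the 40-second delay and the username myuser are assumptions specific to my setup):

#! /bin/bash
# give autossh, started at boot, time to establish its tunnel
sleep 40
# run the R script as the regular user rather than as root
su - myuser -c '/usr/bin/Rscript /home/myuser/launch_script.R'
shutdown -h now

Startup scripts on GCE already run as root, so no sudo is needed, and su drops privileges for the R session.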

AWS Elastic Beanstalk deploy not working

I'm new to AWS Elastic Beanstalk. I'm trying to deploy a new application through awsebcli and I'm getting the following error:
Error: OSError :: [WinError 145] The directory is not empty '.elasticbeanstalk\app_versions'
I was able to init the eb application. I am running the command line under administrator privileges.
Please help.
I've just run into the same issue.
"eb deploy" temporarily creates a subfolder "app_versions" in the ".elasticbeanstalk" folder at the root of the project that contains the zip file to be uploaded to S3. Once done, the folder gets deleted. Check whether any software on your computer might be responsible for preventing this.
The cause for me was a files-syncing software (Dropbox-like) that was watching the entire project for file/folder changes.
I'm developing a Django application and I get this message:
Uploading app to S3. This may take a while. Upload Complete.
How to fix it every time it happens (a condensed sketch follows these steps):
Disable/pause file-syncing applications such as Google Drive Sync, OneDrive, or Dropbox.
Delete mysite\.elasticbeanstalk\app_versions if it exists; don't worry, it's recreated each time you run "eb deploy".
Open a command prompt in the mysite\ folder and run the command:
pip freeze > requirements.txt
Navigate to mysite\ and run eb deploy again; it should work.
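Condensed into shell commands, the whole fix is roughly the following (on Windows, delete the folder via Explorer or rmdir /s /q instead of rm -rf):

cd mysite
rm -rf .elasticbeanstalk/app_versions    # safe: recreated on every deploy
pip freeze > requirements.txt
eb deploy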
(Screenshots omitted: the error message when it's not working, and the output when it's working.)