"cf ssh" into java buildpack app - how to run script that uses java? - cloud-foundry

I have deployed Keycloak-Server (as a Wildfly Swarm fraction) to the Swisscom Cloud Foundry environment (with a Java build pack).
When I try to access the Keycloak admin console I get the following error:
"You need local access to create the initial admin user. Open http://localhost:8080/auth or use the add-user-keycloak script."
How could I resolve this?
Can I somehow open an ssh tunnel to my Java buildpack app in order to access it with http://localhost:8080?
I also tried to "cf login" and "cf ssh" into my app. I recreated "add-user-keycloak.sh" by copy/pasting its contents. When I try to execute it, I get the error "java: command not found".
This is the script: https://github.com/keycloak/keycloak/blob/master/distribution/feature-packs/server-feature-pack/src/main/resources/content/bin/add-user-keycloak.sh

You can use cf ssh to open an ssh tunnel into your container and access a URL within: cf ssh your-app -N -L 8080:localhost:8080.
This will listen on port 8080 on your machine and forward any requests to it to port 8080 in your app container. So you should be able to point your browser at http://localhost:8080/auth to get to the console.
Running the script may be a bit more complicated: the Java Buildpack has no standardized location for the java executable, and it is not added to the PATH when you cf ssh into the container, so you'd first need to find it, as sketched below.
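A rough sketch of that search (the paths below are assumptions; where the buildpack puts its JRE varies by version):
cf ssh your-app
# inside the container:
find /home/vcap -type f -name java 2>/dev/null
# a common Java Buildpack location, if the find output agrees:
export PATH="$PATH:/home/vcap/app/.java-buildpack/open_jdk_jre/bin"
java -version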
I have not used Keycloak myself so my answer is limited to how to tunnel into your app container to access a local console.
Either way, note that if this admin user is saved to local disk and not to some external storage, you may need to go through this again the next time the app is restaged (either by you or by the system to apply patches to its rootfs).

Related

How to access Django app in docker container from another machine?

I am pretty new to Docker and Django. What I did was: I used PuTTY to SSH into a Linux server, created a folder in the root directory, and then started a new project using django-admin startproject.
Now, since I am using PuTTY for SSH terminal access, I cannot open a browser on the Linux machine and hit 127.0.0.1:8000 to see whether Django's "congratulations!" screen is visible.
So I assumed that the server might be running after the runserver command. Then, using Docker, I prepared a container on the Linux machine where I have exposed port 9000. I cannot access this container either, since I cannot open a browser on the Linux machine. Now, I have three questions:
1.) How do I access this Docker container (inside the Linux machine) from my Windows machine? By this I mean: if I open up, let's say, the Google Chrome browser on the Windows machine and enter some url:port, will I be able to see the "congratulations!" screen in my browser on Windows?
2.) I am pretty confused about how the container's network port and IP work (I mean, how does the host or any other PC access this Docker container). I looked at a lot of documentation and YouTube videos but I am still very confused. I know that to make your website/app accessible to the external world you need a domain name hosted on some cloud, which you pay for, but how can Docker do this for free? Might sound like a lame one, but please help me understand.
3.) What should my docker run command look like for access from my Windows machine?
My dockerfile:
FROM python:3.6-slim
# Don't buffer Python output, so logs show up immediately
ENV PYTHONUNBUFFERED=1
# Create the project directory and make it the working directory
RUN mkdir /Django
WORKDIR /Django
# Copy the project in and install its dependencies
ADD . /Django
RUN pip install -r requirements.txt
# Document the port the development server listens on
EXPOSE 9000
CMD python manage.py runserver 0.0.0.0:9000
I am using the following command to build:
docker build -t myproj .
Please help clarify my questions, guys. I'll be forever grateful :)
Thanks all!
When you run the container, you need a docker run -p option:
docker run -p 12345:9000 myproj
The second port number must match the port number the actual server process is listening on (in your case, the port argument to ./manage.py runserver). The first port number can be any port number that's not otherwise in use on the host system.
Then (subject to networking and firewall constraints) another system can reach the container using the host's IP address and the first port number, e.g. http://my-dev-system.internal.example.com:12345. If you're calling from the host directly, the two systems are the same, and in this special case you can use http://localhost:12345.
As an implementation detail the container happens to have its own IP address but you never need to look it up or use it. (Among other problems, it is not accessible from other machines.) From other systems' points of view a Docker-based process is indistinguishable from the process running directly on the host. Docker doesn't address the problems of needing somewhere to host the application, coming up with a DNS name for the host, or other similar concerns.
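Putting the above together, an end-to-end check might look like this (the host name reuses the example above, and curl from the other system is just one way to test):
docker build -t myproj .
docker run -p 12345:9000 myproj
# from another system on the network:
curl http://my-dev-system.internal.example.com:12345/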
Note that EXPOSE 9000 on its own does not make the port reachable from outside; it only documents which port the process inside the container listens on. The port becomes visible to the outer world only when you publish it with docker run -p. After doing so, go to a browser, navigate to <server_ip>:<published_port>, and you should see the message.
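As a quick way to confirm what is actually published (the container name here is a placeholder):
docker run -d --name myproj-test -p 9000:9000 myproj
docker port myproj-test
# prints something like: 9000/tcp -> 0.0.0.0:9000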

Trying to deploy Django on SiteGround

I am trying to deploy a Django app on SiteGround through SSH. I have transferred all the files through FileZilla. Everything is set up.
I have developed several apps on AWS using Ubuntu, but SiteGround's SSH environment runs Fedora, which I am not that familiar with, and I can't get superuser privileges.
Running my Django server on port 8000:
python manage.py runserver 0.0.0.0:8000
The host name is already added to ALLOWED_HOSTS in settings.py:
ALLOWED_HOSTS = ["himeshp7.sg-host.com","*"]
The server is running in the SSH session, but I am unable to open my web app in the browser. In AWS you get the option to enable ports in Security Groups, but I couldn't find anything like that on SiteGround. I also talked with customer care, but they are telling me to upgrade; I doubt whether it will work even after that, as I couldn't find any proper guide for deploying Django on SiteGround.
You need to add your server's IP address to ALLOWED_HOSTS and run python manage.py runserver <your_server_ip_address>:8000 to simply run your app in debug mode (replace <your_server_ip_address>). You can then access your app over port 8000.
To host your app in production you need to do more than run it with that command: install a WSGI HTTP server, configure it to run your app on port 80 or some other port, and so on; see the sketch below.
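For example, a minimal Gunicorn setup might look like this (the project name myproj is hypothetical, and whether SiteGround's shared plan lets you install it or bind the port is exactly the limitation discussed below):
pip install --user gunicorn
gunicorn myproj.wsgi:application --bind 0.0.0.0:8000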
Amazon AWS has UI for most of the things so that you could easily enable ports and such other things. This is not the case of other hosting providers.
Unless you have sudo privileges, there is no way to run a Django app on shared hosting. Hosting providers that give SSH/terminal access on shared hosting will not grant sudo privileges, for security reasons. You would need a VPS/dedicated account for that, which costs more but gives you greater control over your server.
Why do I need sudo privileges ?
You may need to install additional packages/dependencies.
To add additional Apache/Nginx config for your domain, etc.
Otherwise you can go for hosting providers that offer an additional "Setup Python App" option in the "Software" section of cPanel for their shared hosting plans. Then you don't need to worry about server configuration.
There are many providers that give this option in their shared hosting. Two such providers that I know of:
Namecheap
A2 Hosting
This is based on the experience I had deploying a Python app on a HostGator VPS.

Google Compute Engine - can't ssh to it after debian upgrade

I upgraded my Debian instance from wheezy to jessie. Everything went well. I rebooted the system and couldn't ssh to it anymore from the compute engine instance page. I noticed the system did reboot, with a different external IP address. I'm able to get to a web server I have running on the virtual machine, so I know everything upgraded and rebooted properly. Google assigned a new external IP to it and I can't login anymore.
The chance that sshd is no longer running is very low, so here are my personal debugging steps for when I can't reach an instance on Google Cloud:
Double-check your ssh parameters (ssh keys, login user, IP address)
Activate ssh debug logging (-v) when you try to connect
Try using Cloud Shell
Check firewall rules in GCP and on your local network
Check the boot logs on the instance serial port
Re-send your SSH key in GCP > Compute > Metadata (bugs sometimes occur with the Google agent on your machine)
After that, you will normally either know how to connect to your instance or know what's wrong with the sshd server.
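For example, the first two checks together might look like this (the key path and user are assumptions based on gcloud defaults):
ssh -v -i ~/.ssh/google_compute_engine my-user@NEW_EXTERNAL_IP
# or let gcloud look up the key and IP for you:
gcloud compute ssh my-instance --zone us-central1-a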
You can review the serial port logs of the affected instance for possible clues about the issue. If you have a snapshot of your instance disk, you can create a new VM from it. Given the symptoms, it is possible that recent changes affected the instance boot sequence or the sshd_config file.
To troubleshoot this, you can enable interactive access, connect to the instance through the serial console, enter the serial port access information to access the disk, and review the ssh config files: $ sudo vi /etc/ssh/sshd_config and $ sudo vi /etc/ssh/ssh_config.
If you don’t have a root password for the serial console, you could use a startup script to add it to your instance as follows:
Go to the VM instances page in Google Cloud Platform console.
Click on the instance for which you want to add a startup script.
Click the Edit button at the top of the page.
Click on ‘Enable connecting to serial ports’
Under Custom metadata, click Add item.
Set 'Key' to 'startup-script' and set 'Value' to this script:
#! /bin/bash
# Create USERNAME, add it to the sudo group, and set its password
useradd -G sudo USERNAME
echo 'USERNAME:PASSWORD' | chpasswd
Example:
#! /bin/bash
useradd -G sudo test1
echo 'test1:pass#100' | chpasswd
Click Save and then click RESET at the top of the page. You might need to wait for some time for the instance to reboot.
Click on 'Connect to serial port' on the page.
In the new window, you might need to wait a bit and press Enter once; then you should see the login prompt.
Log in using the USERNAME and PASSWORD you provided.
Example:
Username: test1 AND Password: pass#100
You can also share a sanitized version of the serial port logs for more information on what may be happening on the instance. This is not due to the change in IP address; however, the serial port logs should give us more insight.

VSTS Task: Window machine file copy: system error 53

I'm trying to make a release from VSTS to a VM (running on AWS) that is running IIS. For that I use three tasks:
Windows Machine File Copy
Manage IIS App
Deploy IIS App
Before the release I'm running a build pipeline that gives me an artifact containing the web app (webapp.zip).
When I manually put it on the server, I can run steps 2 and 3 of my release and the application works. The problem is that I can't get the Windows Machine File Copy to work. It always throws an exception: 'System Error 53: The network path was not found'. Of course the machines are not domain-joined, because I'm running my release on VSTS and need the files on an AWS VM. I tried opening port 445 (for file sharing) and made sure the user has rights to the destination path on the target machine.
So my question is: how can I actually move the files from VSTS to the AWS VM if the two machines are not domain-joined?
Use an FTP Upload or cURL upload step/task instead.
Regarding how to create FTP site, you can refer to this article: Creating a New FTP Site in IIS 7.
Disclaimer: this answer merely explains how to fulfill the requirements to use the Windows Machine File Copy and Manage/Deploy IIS tasks.
Please always be concerned about the security of your target hosts; hardening and a security assessment are absolutely necessary. As noted in the comments, you need to protect the deployment channel from the outside world.
In order to use the Windows Machine File Copy task you need to:
on the target machine (the one running IIS), enable File and Printer Sharing by running the following command from an administrative command prompt:
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes
ensure that PowerShell 4 or more recent is installed on the target machine; the following, executed from a PS command prompt, prints the version installed on the local machine:
PS> $PSVersionTable.PSVersion
To get PowerShell 5 you could, for example, install WMF 5;
on the target machine you must have .NET Framework 4.5 or more recent installed.
The other two tasks (Manage/Deploy IIS) both require a WinRM HTTPS listener to be enabled on the target machine. For a development deployment scenario you could follow these steps:
download the ConfigureWinRM.ps1 PowerShell script from the official VSTS Tasks GitHub repository;
enable the RemoteSigned PowerShell execution policy from an administrative PowerShell command prompt:
PS> Set-ExecutionPolicy RemoteSigned
run the script with the following arguments:
PS> .\ConfigureWinRM.ps1 FQDN https
Note that FQDN is the complete domain name of your machine as it is reached by the VSTS task, e.g. myhostname.domain.example.
Note also that this script downloads two executables (makecert.exe and winrmconf.cmd) from the Internet, so this machine must have an Internet connection. Otherwise, download those two files, place them next to the script, and comment out the Download-Files invocation in the script.
Now you have enabled a WinRM HTTPS listener with a self-signed certificate. Remember to tick the "Test Certificate" option for those two tasks (which, ironically, means skipping certificate validation; a better name would have been "Skip CA Check").
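To confirm the listener is in place, you can enumerate the WinRM listeners on the target machine (the expected output described here is illustrative):
PS> winrm enumerate winrm/config/listener
The output should include a listener with Transport = HTTPS and the thumbprint of the self-signed certificate.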
In production deployment scenario you may want to use instead a certificate which is properly signed.
Windows Machine File Copy is designed for internal networks; exposing it to the internet would open your server up to attack. FTP would also pose a significant security risk unless managed properly.
The easiest way to move forward would be to run an Agent on the VM in AWS that you want to release to. The agent will then download the artifacts to the AWS VM and run whatever tasks you need to install.
This allows you to run tasks on the local machine without opening it up to security risks.
If you have multiple machines to manage in AWS, you can easily create a local network that allows your single agent to use Windows Machine File Copy to push files to multiple VMs without risk.

Execute commands in application deployed in Cloudfoundry

I have deployed a Java Spring MVC based application on Cloud Foundry v2. My application needs to access another server by calling its web services over HTTPS. This requires that the server's certificate be trusted by the JVM.
So I need to run a command to install the SSL certificate into the JVM's trust store. But so far I don't see a way to get a console for an application deployed on Cloud Foundry.
One option would be to specify a custom start command for the application, and run the cert import command before the actual application start command, as sketched below.
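A minimal sketch of such an import, assuming the certificate ships with the app at certs/backend.crt and the JRE's default trust store sits at $JAVA_HOME/lib/security/cacerts (both the path and the stock changeit password are assumptions that vary by buildpack and Java version):
# import the cert into the JRE's trust store before the app starts
keytool -importcert -noprompt -alias backend-server -file certs/backend.crt -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit
You would then chain this in front of the real start command, for example via cf push your-app -c 'keytool ... && <original start command>'.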
Another option would be to extend the Java buildpack and either install the cert during the JVM setup (possibly by creating an additional "framework" component) or have the buildpack set up the cert import in the start command automatically. See https://github.com/cloudfoundry/java-buildpack#configuration-and-extension for details on extending the buildpack.