Windows 7 and Red Hat machines communicating via secure GlassFish web services - web-services

I have inherited a web service that was written in NetBeans and runs on GlassFish (version 3); it is used for communication between a Red Hat server and a Windows 7 machine.
Both ends can be started manually, by using NetBeans (6.9) to start the GlassFish server and deploy the service, and they communicate securely quite happily.
Of course, manually deploying the system like this is far from ideal, so I have arranged for GlassFish to be deployed via the command line on Red Hat:
Create a user
groupadd glassfish
useradd -s /bin/bash -d /home/glassfish -m -g glassfish glassfish
Copy from CD to glassfish directory
mkdir /cdrom
chmod 777 /cdrom
mount /dev/cdrom /cdrom
cp /cdrom/glassfish-v3.zip /home/glassfish/glassfish-v3.zip
Log in as the new user in a terminal window
sudo -i -u glassfish
Install GlassFish V3 using user glassfish
cd ~
unzip glassfish-v3.zip
rm glassfish-v3.zip
Exit the glassfish user's shell
Then the .war file (the web service itself) is placed in
glassfish/glassfishv3/glassfish/domains/domain1/autodeploy/CommandAndControlService.war
(As described in: http://download.oracle.com/docs/cd/E19798-01/821-1757/geyvr/index.html)
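(For completeness: I believe the same .war could instead be deployed explicitly from the command line with asadmin, though I have not needed to; the path below is just where I copied the file.)
glassfishv3/glassfish/bin/asadmin deploy /home/glassfish/CommandAndControlService.war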
Then to run GlassFish I just log in as the glassfish user and launch it:
sudo -i -u glassfish
glassfishv3/glassfish/bin/asadmin start-domain
(Which automatically deploys the .war file from earlier, as it’s in the “auto-deploy” directory)
Then for the Windows 7 machine…
GlassFish v3 is unpacked to
C:\glassfishv3
Then to start it I type:
C:\glassfishv3\glassfish\bin\asadmin.bat start-domain
All of the above works without any problems at all; the two machines chatter away happily over a non-secure connection.
The problem is that a secure connection is required. Security has been built into the service, and it works perfectly when GlassFish is started through NetBeans on both machines.
However, when using the above procedure to start GlassFish, the secure link doesn't work because the certificates are "self-signed" (the code uses "Mutual Certificates Security").
At first I assumed this must be something in the code, but as it works fine when started through NetBeans, I now suspect it is something to do with how I'm starting GlassFish and deploying the .war file.
(I have tried starting just one service automatically and the other through NetBeans, but I get the same issue: the end started via the command line cannot connect due to the self-signed certificates.)
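My best guess so far is that NetBeans exchanges the self-signed certificates between the two domains' stores when it deploys, whereas my freshly-unzipped GlassFish still has its default stores. Would something along these lines fix it? (A sketch only; I am assuming GlassFish's default keystore.jks / cacerts.jks under domains/domain1/config, the default s1as alias and the default changeit master password.)
Export this machine's certificate:
keytool -export -alias s1as -keystore keystore.jks -storepass changeit -file thishost.cer
Copy thishost.cer to the other machine and import it into that side's truststore:
keytool -import -alias otherhost -keystore cacerts.jks -storepass changeit -file thishost.cer -noprompt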
Any ideas?
Many Thanks

Related

VS Code integration with C++ development-environment inside Docker

I would like to run VSCode on my host machine, but (using its features / extensions) fire up tools from within the dev-env living inside my Docker container.
I have set up a Docker image as a development environment for C++. Let's call it dev-env.
It is Linux-based and contains the required libraries, cross-compilation toolchains and various tools we use for building and testing our software (cmake, ninja, cppcheck, clang-tidy, etc.).
I have a Git repository on the host machine, which I mount inside the container.
So my usual workflow would be to run docker:
host$
host$ docker run -v path/to/my/codebase/on/host:path/inside/docker -h dev-env --rm -it image_name bash
docker#
docker# cd build; cmake ..
etc...
And as such, I can build, test and run my tools inside my unified development environment inside the container.
Now, the goal is to take it out of the terminal to the world of IDE.
I happen to use VS Code.
On the host machine, I open my codebase folder in VSCode. Since it's mapped inside the container, any changes I make locally will be available inside dev-env as well.
But if I now run anything from VSCode (CMake configure, build, etc.), it will of course call the tools on my host machine, which will not work, and is not what I want.
With tasks defined in tasks.json I could probably manage by having them run something like docker exec CONTAINER my_command, as sketched below.
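For example (a sketch; it assumes the container was started with --name dev-env and keeps the placeholder mount path from above), a shell task could run:
docker exec -w path/inside/docker dev-env bash -c "mkdir -p build && cd build && cmake .. && ninja"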
It gets more complicated with extensions:
What I would like is to have, e.g., the VSCode CMake Tools extension configured in such a way that when I run CMake Configure (in a VSCode running on my host machine), it actually runs cmake from within the Docker container, using the cmake installed inside Docker, not the one on my host machine.
Temporary solution: Forwarding display through X / VNC
That is: installing VSCode inside the Docker container, running an X/VNC server inside it, exposing the port and connecting to it from the host machine.
Yes, it is possible; I have it running here. But it has many limitations and problems, of which the most painful is the lag/delay.
This is a bad solution in general, so I would strongly push for avoiding it.
Another solution that I can think about:
VSCode instance running as a server inside the docker.
VSCode instance on your host connecting to the server instance.
You do all the work inside your host VSCode, but anytime you run a command, it is executed by a server instance, which runs everything inside Docker.
I guess this would require support from VSCode (or maybe an extension).
The VSCode Live Share extension is not made exactly for that, but its functionality might do the job. I have not tested it yet.

Spring Boot 2 on VPS can't be accessed

I have an app that was made using Spring Boot 2.
I use Undertow as my embedded web server.
My VPS OS is Ubuntu 14.04 LTS, and Java 8 is already installed, of course, alongside Maven.
After my App.jar was successfully generated by mvn clean install, I moved the fat jar to my VPS.
When I run java -jar App.jar it works perfectly,
but when I access it using the IP, nothing shows up in the browser.
This issue has been solved already; it was just some missing configuration on the server side :)
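For anyone who lands here with the same symptom, the usual suspects (an assumption on my part, not necessarily what was wrong here) are the VPS firewall blocking the port and the app binding to the wrong address or port; for example, with ufw and port 8080:
sudo ufw allow 8080/tcp
java -jar App.jar --server.port=8080 --server.address=0.0.0.0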

Google Material Design Components on Ubuntu Server on Google Cloud

I cannot get Material Design Components to run on my virtual server. I have tried following their "quick start" page and their Material basics (Web 101) course to no avail. I am able to execute most of the steps in either tutorial, but I cannot see the JavaScript apply to the page. What am I doing wrong? I will detail my process below so that someone can hopefully spot my mistake.
First I create a VM instance on the Google Cloud Platform. It is a Ubuntu 18.04 LTS image with 1 CPU, 3.75 GB memory, and HTTP/HTTPS traffic allowed on the firewall.
Then I install Node.js and NPM on the machine.
sudo apt-get update
sudo apt-get install nodejs
sudo apt-get install npm
Then I clone the codelab from GitHub. (following Web 101 in this example)
git clone https://github.com/material-components/material-components-web-codelabs
...and navigate to the pertinent directory.
cd material-components-web-codelabs/mdc-101/starter
In that directory, I install the project's dependencies.
npm install
The install works just fine, save for one optional dependency called "/chokidar/fsevents", which is apparently for Mac OS X anyway.
From the same directory, I start the dev server.
npm start
At this point, the tutorial says I should be able to reach the site. It says to navigate to http://localhost:8080/. Since I am installing this on a remote, cloud server, I replace "localhost" with the server's external IP. I invariably get a timeout error from the browser.
Ensure that port 8080 is open and listening inside the VM instance by running the telnet, nmap, or netstat commands:
$ telnet localhost 8080
$ nmap <external-ip-vm-instance>
$ netstat -plant
If it is not listening, then this means that the application was not installed correctly.
Look at the firewall rules in GCP to make sure that the VM instance allows ingress traffic on port 8080.
Since you are running Ubuntu, make sure that the default Ubuntu firewall (ufw) is not blocking port 8080. If it is, allow access to port 8080 by running the following command:
$ sudo ufw allow 8080/tcp
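One more thing worth checking (an assumption on my part: the codelab's npm start appears to serve the site with webpack-dev-server, which binds to localhost by default): the dev server itself may be refusing external connections even with every firewall open. Extra flags can be passed through npm:
$ npm start -- --host 0.0.0.0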

Do I have to leave the terminal open while using the Parse Server?

I have an AWS EC2 Instance running Ubuntu.
I've installed Parse Server and MongoDB on it. I noticed that whenever I close the terminal on my laptop, my Android app cannot reach the server.
So my question is: can I close the terminal window, leave the instance running on AWS, and still make use of my Parse Server?
I solved it using the nohup command:
$ nohup parse-server --appId APP_ID --masterKey MASTER_KEY --databaseURI DATABASE_URI &
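To verify it survived closing the terminal, reconnect and check that the process is still there; a process manager is a sturdier alternative to nohup (pm2 shown below as one option, assuming it is installed globally via npm):
$ ps aux | grep parse-server
$ sudo npm install -g pm2
$ pm2 start $(which parse-server) -- --appId APP_ID --masterKey MASTER_KEY --databaseURI DATABASE_URI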

Jenkins can't copy files to Windows remote host

I have a Jenkins server on OS X 10.7, which polls a Subversion server, builds the code and packages the app. The last step that I need to complete is deploying the app on a remote host, which is a Windows share. Note that my domain account has write access to the target folder and the volume is mounted. I've tried using a shell script build step:
sudo cp "path/to/app" "/Volumes/path/to/target"
However, I get a "no tty" response. I was able to run this command successfully in Terminal, but not as a build step in Jenkins.
Does this have something to do with the user being used when starting up Jenkins? As a side note, the default user.name is jenkins and my JENKINS_HOME resides in /Users/Shared/Jenkins. I would appreciate any help as to how to achieve this.
Your immediate problem seems to be that you are running Jenkins in the background and sudo wants to prompt for a password. Run Jenkins in the foreground with $ java -jar jenkins.war.
However, this most probably won't solve your problem, as you'll be asked to enter a password when the command runs - from the terminal you started Jenkins from (which is presumably not what you want). You need to find a way to copy your files without needing root permissions. In general, it is not a good idea to rely on administrative permissions in your builds (there are exceptions, but your case is not one of them).
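For example (a sketch with placeholder share and paths, not a tested recipe): mount the share as the user Jenkins runs under, so that the build step can copy without sudo:
mkdir -p /Users/Shared/Jenkins/mnt
mount -t smbfs "//DOMAIN;user@server/share" /Users/Shared/Jenkins/mnt
cp "path/to/app" /Users/Shared/Jenkins/mnt/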