I followed the instructions at https://wso2.com/integration/install/docker/get-started/. When I try to access
https://localhost:9743/dashboard
as indicated, I immediately get the HTTPS certificate warning in Edge, Chrome, and Firefox. After entering the login and password (admin, admin), I get a dialog box as follows.
When I click on the link in that dialog, I get "unable to connect" in all browsers.
I downloaded the PEM certificate in Firefox and added it to the trusted store, but it did not help.
I am not sure what needs to be done.
You need to start the MI Docker container so that it exposes port 9164, as follows:
docker run -it -p 8253:8253 -p 8290:8290 -p 9164:9164 docker.wso2.com/wso2mi:1.2.0
Then you can start the MI Dashboard container as below.
docker run -it -p 9743:9743 wso2/wso2mi-dashboard:1.2.0
When you log in to the MI dashboard, it calls the MI instance (on port 9164) to authenticate the user.
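For example, a minimal sketch that puts both containers on a shared user-defined Docker network, so the dashboard can reach the MI instance's port 9164 by container name (the image tags are taken from the commands above; the network and container names are just illustrative):
docker network create wso2-net
docker run -it --network wso2-net --name wso2mi -p 8253:8253 -p 8290:8290 -p 9164:9164 docker.wso2.com/wso2mi:1.2.0
docker run -it --network wso2-net --name wso2mi-dashboard -p 9743:9743 wso2/wso2mi-dashboard:1.2.0
Depending on your dashboard configuration, you may still need to point it at the wso2mi host name instead of localhost.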
So I deployed my Django REST Framework API on an AWS EC2 instance. However, whenever I want to use it I have to manually connect to the EC2 instance with PuTTY and start the API by running python manage.py runserver 0.0.0.0:8000.
When I turn off my PC, PuTTY closes and the API can no longer be accessed at that IP address.
How do I keep my API running permanently? Does switching it to HTTPS help? What else can be done?
You can keep it running permanently in the following ways:
Connect to your EC2 instance using SSH.
Then deploy your backend (Django) on that instance and run it on any port.
Once it is running on your desired port, you can close the terminal window; just don't press Ctrl+C, so that the Django server does not stop. Simply close the window and the server will keep running.
You can also run the Django server inside tmux (a terminal multiplexer, essentially a terminal inside your terminal); here is a tutorial on tmux:
https://linuxize.com/post/getting-started-with-tmux/
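For instance, a quick sketch of running the server inside tmux (assuming tmux is installed on the instance; the session name is arbitrary):
tmux new -s django                         # start a named tmux session
python manage.py runserver 0.0.0.0:8000    # start the server inside the session
# detach with Ctrl+b then d; the server keeps running after you close PuTTY
tmux attach -t django                      # reattach later to check on it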
Another approach is to deploy the Django app in a Docker container.
I hope this helps you overcome your problem.
Thanks.
OK, I finally solved this. When you close PuTTY or any other SSH client, the session goes offline and whatever was running in it stops. However, if you run the command with nohup as a background process, it continues running even after you close your client. The command is:
$ nohup python ./manage.py runserver 0.0.0.0:8000 &
Of course you can use tmux or Docker, as suggested by madi, but I think running this one command is much simpler.
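If you also want to capture the output and be able to stop the server later, a small variation on the same command (the log and PID file names are just illustrative):
nohup python ./manage.py runserver 0.0.0.0:8000 > django.log 2>&1 &   # ignore hangups, write output to django.log
echo $! > django.pid                                                  # remember the server's process ID
kill "$(cat django.pid)"                                              # run this later to stop the server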
You can use pm2.
First, install pm2.
Then create a server.json file in the root directory of your Django app to run it:
{
  "apps": [{
    "name": "appname",
    "script": "manage.py",
    "args": ["runserver", "0.0.0.0:8888"],
    "exec_mode": "fork",
    "instances": "1",
    "wait_ready": true,
    "autorestart": false,
    "max_restarts": 5,
    "interpreter": "python3"
  }]
}
Then you can run this app with pm2 start server.json.
Your app will run on port 8888.
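For completeness, the surrounding pm2 workflow might look roughly like this (assuming Node.js/npm are already installed; the startup/save step is optional):
npm install -g pm2        # install pm2 globally
pm2 start server.json     # start the Django server described in server.json
pm2 status                # check that "appname" is online
pm2 logs appname          # follow the server output
pm2 startup && pm2 save   # optional: bring the app back automatically after a reboot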
This is my first time trying to run the Stellar Docker image in persistent mode, and I receive this error after entering and confirming the new password:
pq: password authentication failed for user "stellar"
Docker command:
docker run --rm -it -p "8000:8000" -v "/dev/stellar:/opt/stellar" --name stellar stellar/quickstart --testnet
I tried editing pg_hba.conf, but I don't see the stellar user that was configured.
Also, I verified that stellar-core.cfg has the correct database password as defined during setup.
I had exactly the same issue while using a password with special characters (too fancy for sed to process?). Then I recreated the '/opt/stellar' shared volume and used stellarpasswd as the new PostgreSQL password, and it worked! :)
Another, maybe more important, piece of info: I had also added a stellar user to my host machine. When the default configurations get rsynced to '/opt/stellar', that might not have worked properly without that user existing on my host machine.
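In other words, a sketch of the reset, assuming the same /dev/stellar host directory as in the question (this wipes the previously generated configuration and database):
sudo rm -rf /dev/stellar/*   # remove the old generated config and PostgreSQL data
docker run --rm -it -p "8000:8000" -v "/dev/stellar:/opt/stellar" --name stellar stellar/quickstart --testnet
# when prompted, choose a new PostgreSQL password without special characters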
After running the ciphertool.bat or ciphertool.sh script in the bin directory of WSO2 Identity Server, the next time the server starts up you are presented with a prompt asking for the keystore and private key password used to configure the WSO2 secure vault. Example:
C:\Program Files\WSO2\Identity Server\5.7.0\bin>wso2server.bat --start
JAVA_HOME environment variable is set to C:\Program Files\Java\jdk1.8.0_181
CARBON_HOME environment variable is set to C:\PROGRA~1\WSO2\IDENTI~1\570D0D~1.0\bin\..
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[Enter KeyStore and Private Key Password :]
I have a WSO2 identity server instance that is running in a Docker container. My passwords are encrypted so I need to provide a keystore/private key password on startup.
This presents an issue though:
I have to run my Docker container with the -it flag in order to create an active bash shell in the container that allows me to type in the keystore and private key password. My docker run command looks like this: docker run -p 443:443 -it wso2-test. If I don't include the -it flag, WSO2 IS never asks for the password, the passwords don't get resolved, and everything fails.
I don't want to use the -it flag because it forces user input and I'd like the containers to run independently.
In order to keep things as automated as possible, I want to provide the keystore and private key password right away when I run the wso2server.sh script (which is the entrypoint of my Dockerfile), rather than when the prompt is presented. Is this possible?
Ideally, a solution would have a Dockerfile entrypoint that looks something like this:
ENTRYPOINT ["wso2server.sh", "run", "KEYSTORE_PASSWORD"]
You should pass the keystore password as an environment variable to the docker run command.
docker run -e KEY_STORE_PASSWORD=wso2carbon secvault-test:latest
This environment variable should be read by the ENTRYPOINT command and written into a file named password-tmp under the $PRODUCT_HOME directory. Here's a sample Dockerfile with such an ENTRYPOINT:
$> cat Dockerfile
FROM ubuntu:16.04
RUN mkdir /opt/wso2is
WORKDIR /opt/wso2is
ENTRYPOINT ["/bin/sh", "-c", "echo $KEY_STORE_PASSWORD > password-tmp && wso2server.sh run"]
Security check:
Since the password is not baked into the Docker image, you can safely push the image to a registry. However, you'll need to provide the environment variable whenever you spin up a new container. Note that a container's environment variables are visible via the docker inspect command.
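For example, anyone with access to the Docker host can read the password like this (the container name is illustrative):
docker inspect --format '{{ .Config.Env }}' my-is-container   # prints KEY_STORE_PASSWORD=... among the environment variables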
I installed the latest Google Cloud Deep Learning VM image today. After the VM launched, I was able to run sudo -i successfully via the web SSH.
Once logged in, I started my TensorFlow model training in the background (using &). A few hours later I was unable to log in as root.
I get the following message:
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo] password for my_username:
I tried:
sudo -i
su sudo -i
su root
I was able to replicate the issue. Any suggestions?
This issue is caused by an internal Google-side problem that removes the user from the "google-sudoers" group. For all affected instances, I suggest the workarounds below until the permanent fix has been rolled out.
Use a different username:
If using the browser SSH window, click on the settings icon (top right) and click "Change Linux Username" in the drop-down.
Or, using the SDK:
$ gcloud compute ssh newusername@instance
Alternatively, enable OS Login on the instance by setting "enable-oslogin=TRUE" in the instance metadata, as described in the OS Login documentation.
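If you go the OS Login route, a minimal sketch of setting that metadata with the SDK (the instance name is a placeholder):
gcloud compute instances add-metadata my-instance --metadata enable-oslogin=TRUE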
You can track the permanent fix by following the Public Issue tracker.
The original answer:
Maybe the solution is to add an SSH key via the Google Cloud Console and log in with another SSH client.
Additional answer:
I do not know why, but at some point the user suddenly stopped being a member of the google-sudoers group...
Then it is enough to have some other user with administrative privileges add your user back to this group:
# usermod -aG google-sudoers your_user_name
Of course, only if there is such a user...
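To verify the membership is back, something like this should do (the username is a placeholder):
id -nG your_user_name   # lists the user's groups; google-sudoers should appear again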
I can't connect to a VM on GCP as root using the browser SSH.
Has anyone had the same problem?
The following message is displayed:
You can drastically improve your key transfer times by migration to OS login.
It might be caused by setting a password...
By default, you log in as your GCP user. To become root, run the following command once the browser SSH session is working:
sudo -s
If you cannot log in with the browser SSH at all, then I suspect a permission issue with that particular user.
The above is the recommended way of doing things; however, if logging in directly as root is absolutely needed, follow the steps below (a consolidated command sketch follows them):
As root, edit the sshd_config file in /etc/ssh/sshd_config:
nano /etc/ssh/sshd_config
Make sure PermitRootLogin is set to “yes” and save the /etc/ssh/sshd_config file.
Restart the SSH server:
service sshd restart
Then change the username to root by clicking the wheel in the top right corner and selecting "Change Linux Username".
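Roughly, the steps above condense to the following, run as root (the sed pattern and service name are assumptions; on Debian-based images the service may be called ssh rather than sshd):
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config   # allow root logins over SSH
service sshd restart                                                         # restart the SSH daemon to apply the change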