Unable to connect remotely from other devices + Vorlon - remote-debugging

Vorlon dashboard is not showing all connected devices.
I have a desktop with Vorlon installed. When I open the page from a laptop or iPad it displays, but the device does not show up in the Vorlon dashboard; it does show up if I open the page from the same desktop.

It was my bad: I was loading vorlon.js from localhost:1337. You have to use the IP address instead of localhost when opening the page from another machine.

There can be a few different problems at play here.
First, you need to find the IP address of the computer that is hosting your application/site, e.g. 192.168.10.4.
Start Vorlon:
c:/> vorlon -v
Start your web server, making sure it binds to the same IP address found above (this example uses web2py):
c:/> python web2py.py -i 192.168.10.4 -p 8000
Add the Vorlon script tag to your page, as shown below.
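For reference, the tag typically looks like this (assuming Vorlon is running on its default port 1337 and the IP address above; adjust both to your setup):
<script src="http://192.168.10.4:1337/vorlon.js"></script>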
Both Python and Node.js have to be allowed through the firewall, so next add rules for the Python executable you are using (which could live inside a virtualenv) and for Node.js.
Finally, make sure there are no other firewall rules matching the ports or applications you use. This is critical: I had a conflicting firewall rule for Node.js that was blocking the connection, and hence the script /vorlon.js could not be found.
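For example, on Windows an inbound firewall rule for Node.js can be added like this (a sketch; the rule name is arbitrary and the path to node.exe is an assumption that may differ on your machine):
netsh advfirewall firewall add rule name="Node.js (Vorlon)" dir=in action=allow program="C:\Program Files\nodejs\node.exe" enable=yes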
Hope this helps.

Related

How to host a web page that was made using Wt(C++ Web toolkit)?

I have created a small project using Wt (C++ Web Toolkit) and I now want to host it.
Resources for this are very sparse, but I read somewhere that LAMP is required (since I am using Linux). So after creating an Ubuntu instance at DigitalOcean, I put my code on it and installed Apache2, MySQL, and PHP. The Apache2 server is working, but it serves HTML/CSS/JS files, while my web page is written entirely in C++.
I got directed to this website: https://www.webtoolkit.eu/wt/doc/reference/html/overview.html#wthttpd where it mentions "connectors" like libwthttp and libwtfcgi. I tried to install them using apt but I get this error: E: Unable to locate package libwtfcgi-dev (same for libwthttp).
The website mentioned above also does not provide clear steps for using these connectors.
I have also looked at other related answers: Host for Wt C++ web framework, deployment issue, but since I am new to web hosting, I would really appreciate step-by-step guidance.
Even here: https://redmine.webtoolkit.eu/projects/wt/wiki/Fastcgi_on_apache the language is very unclear as to where fastcgi.conf is located.
This answer assumes that you have the project running locally and just need to 'get it on the web'. You have a few options...
Reverse Proxy
From the tutorial it looks like there is a built-in web server. One option would be to use this and then set up a reverse proxy (this can be done through Apache, see the sketch below) to map traffic from yourwebsite.com to http://localhost:port, where port is whatever you started the web server with.
The example they give to start the local webserver is:
$ g++ -std=c++14 -o hello hello.cc -lwthttp -lwt
$ ./hello --docroot . --http-address 0.0.0.0 --http-port 9090
here, port would be 9090.
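A rough sketch of that Apache setup (assuming Apache 2.4 on Ubuntu, with the Wt server listening on port 9090 as above; the site file name is made up):
$ sudo a2enmod proxy proxy_http
then add these directives to the site's VirtualHost (e.g. /etc/apache2/sites-available/yourwebsite.conf):
ProxyPass "/" "http://localhost:9090/"
ProxyPassReverse "/" "http://localhost:9090/"
and reload Apache:
$ sudo systemctl reload apache2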
Fast CGI
From the hello world and this thread it seems like a webserver setup with FastCGI may be able to do what you are looking for. Maybe this doc from Digital Ocean can help you get started?
It seems unlikely that you need the full LAMP stack (Linux, Apache, MySQL, PHP) specifically for this. The doc you link does suggest that you need to link against the appropriate library for whichever approach you take, libwthttp or libwtfcgi, and you are correct that apt would be the place to get them on Ubuntu. This is a separate issue, but maybe this answer could be a starting point?
Is LAMP needed?
LAMP is definitely not required by Wt:
Linux is supported, but so is Windows.
Apache can be used as a reverse proxy, but some alternatives may be even better (e.g. HAProxy).
MySQL is supported, but so are other database backends such as PostgreSQL.
PHP/Perl/Python: not needed in any way.
Reverse proxy
Using a reverse proxy is a great way to deploy your Wt application (see eisaac's answer). If you already use Apache for other websites on your server, it's perfectly OK to use it as a reverse proxy. If you don't need all the features of Apache, HAProxy may be the better choice. See also the deployment configurations mentioned in the Wt docs:
https://redmine.emweb.be/projects/wt/wiki/Wt_Deployment
Easiest solution (but not scalable): let Wt directly listen on port 80 (or port 443)
If Wt is the only site running on your Ubuntu instance, you can also make it listen directly on port 80 (or port 443 in case of https):
./hello --docroot . --http-address 0.0.0.0 --http-port 80
See also the different command line options in the Wt doc: https://www.webtoolkit.eu/wt/doc/reference/html/overview.html#config_wthttpd
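Note that binding to port 80 or 443 normally requires root privileges. A possible workaround on Linux, assuming the hello binary from above, is to grant the executable the bind capability instead of running it as root:
$ sudo setcap 'cap_net_bind_service=+ep' ./hello
After that, the command above can bind to port 80 as an unprivileged user.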

How to access Django app in docker container from another machine?

I am pretty new to Docker and Django. What I did was use PuTTY to SSH into a Linux server, create a folder in the root directory, and then start a new project using django-admin startproject.
Now, since I am using PuTTY for SSH terminal access, I cannot open a browser on the Linux machine and visit 127.0.0.1:8000 to check whether Django's "congratulations!" screen is visible or not.
So I assumed that the server was running after the runserver command. Then, using Docker, I prepared a container on the Linux machine in which I have exposed port 9000. I cannot access this container either, since I cannot open a browser on the Linux machine. Now, I have three questions:
1.) How do I access this Docker container (inside the Linux machine) from my Windows machine? By this I mean: if I open, let's say, Google Chrome on the Windows machine and enter some url:port, will I be able to see the "congratulations!" screen in my browser on Windows?
2.) I am pretty confused about how the container's network port and IP work (I mean, how does the host or any other PC access this Docker container?). I have looked through a lot of documentation and YouTube videos but I am still very confused, because I know that to make your website/app accessible to the external world you need a domain name hosted somewhere, which you have to pay for, so how can Docker do this for free? It might sound like a lame question, but please help me understand.
3.) What should my docker run command look like so that I can access the app from my Windows machine?
My Dockerfile:
FROM python:3.6-slim
ENV PYTHONUNBUFFERED=1
RUN mkdir /Django
WORKDIR /Django
ADD . /Django
RUN pip install -r requirements.txt
EXPOSE 9000
CMD python manage.py runserver 0.0.0.0:9000
I am using the following command to build:
docker build -t myproj .
Please help clarify my questions, guys. I'll be forever grateful :)
Thanks all!
When you run the container, you need a docker run -p option:
docker run -p 12345:9000 myproj
The second port number must match the port number the actual server process is listening on (in your case, the port argument to ./manage.py runserver). The first port number can be any port number that's not otherwise in use on the host system.
Then (subject to networking and firewall constraints) another system can reach the container by using the host's IP address or hostname and the first port number, e.g. http://my-dev-system.internal.example.com:12345. If you're calling from the host directly, the two systems are the same, and in this special case you can use http://localhost:12345.
As an implementation detail the container happens to have its own IP address but you never need to look it up or use it. (Among other problems, it is not accessible from other machines.) From other systems' points of view a Docker-based process is indistinguishable from the process running directly on the host. Docker doesn't address the problems of needing somewhere to host the application, coming up with a DNS name for the host, or other similar concerns.
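Putting the pieces together for the Dockerfile in the question (a sketch; the image name matches the question and <linux-server-ip> is a placeholder for the Linux machine's address):
docker build -t myproj .
docker run -d -p 9000:9000 myproj
docker ps --format "table {{.Names}}\t{{.Ports}}"
The last command should show a mapping like 0.0.0.0:9000->9000/tcp; from the Windows machine you would then browse to http://<linux-server-ip>:9000/. Depending on your Django settings you may also need to add the server's IP (or "*") to ALLOWED_HOSTS.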
Try running it without EXPOSE 9000; exposing a port on its own only makes it visible inside the container network, not to the outside world. After doing that, go to a browser and navigate to <server_ip>:9000 and you will probably see the message.

Remote debugging .net core 2.0 console app over ssh

I am building a .NET Core 2.0 console app on Windows 10, but I want to debug it on a remote Linux server running Debian 9.
I found this article:
https://blogs.msdn.microsoft.com/devops/2017/01/26/debugging-net-core-on-unix-over-ssh/
but where I get stuck is selecting the SSH connection. My remote server requires authentication, and if I enter user@ip:port it doesn't find anything.
I found some mention of using SSH tunnelling, but as there is no dotnet process listening on the server (dotnet is installed, but I can't see any listening service running), I am unsure exactly which port I'm meant to be tunnelling, or even in which direction.
What do I need to do to get my SSH connection visible in the debugger?
I just tried this and I found that the Find.. button doesn't do anything either.
First, enable SSH connections on your Linux host (in my case, Ubuntu, I had to run sudo ufw allow ssh). Test things out by opening cmd on Windows and running ssh user@host.
Then, in Visual Studio, in the SSH attach-to-process window, make sure you hit "Refresh" and check the "Show processes from all users" box. You should see the "dotnet" process running.
EDIT: you should be prompted for the remote host's password at some point; for example, when I changed the password on the remote host and then attempted to debug, a credentials dialog was shown.
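For the attach to work there must already be a dotnet process running on the Linux box. A rough sketch of getting one there (project name, user, and paths are placeholders):
dotnet publish -c Debug -o out
scp -r out user@remote-host:~/myapp
ssh user@remote-host
dotnet ~/myapp/MyApp.dll
With the app running, it should then show up in the attach dialog as a dotnet process.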

How to set up a remote Django development environment?

I have to set up a development environment on an Ubuntu machine (16.04).
It's Django + PostgreSQL + Nginx. I think I can install all of these together on that machine, but I have no idea how to connect to it using PyCharm running on my PC, or how to manipulate the database.
Could anyone tell me how to connect to it? This is the first time I have had to use a remote machine.
By the way, my PC and the Ubuntu machine are on the same LAN, but another person who was asked to work on the database is not.
I hope I could get some suggestions from the community.
One of the best and most common ways is to use SSH.
Here you can find an official guide on how to enable an SSH server on Ubuntu 16.04.
You can use PuTTY to connect from a Windows PC to your SSH server, or if you're on Mac or Linux there should already be an SSH client installed. So, just run ssh username@servername
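For the database part, a common approach is an SSH tunnel so that PyCharm (or any database client) on your PC can reach PostgreSQL as if it were local. A sketch, assuming PostgreSQL on its default port 5432 and the same username/servername as above:
ssh -L 5432:localhost:5432 username@servername
While the tunnel is open, point your local database client at localhost:5432.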
Use SSH. Assuming you're on Linux, open a terminal and type:
ssh username@local-ip-address-of-machine
and then type your password when prompted. Your terminal window essentially acts as a terminal on your remote machine. From here, I suggest you research Docker in order to set up a custom environment for your Django project. I have only told you how to connect, since that's your question, and there are plenty of tutorials on setting up Django and Docker. If the other person wants to connect, you will need to forward port 22 on your router to the local IP of the machine.
If you haven't got SSH set up, this page tells you how.

VSTS Task: Windows Machine File Copy: system error 53

I'm trying to make a release from VSTS to a VM (running on AWS) that is running IIS. For that I use three tasks:
Windows Machine File Copy
Manage IIS App
Deploy IIS App
Before the release I'm running a build pipeline that gives me an artifact containing the web app (webapp.zip).
When I manually put it on the server I can run steps 2 and 3 of my release and the application works. The problem is that I can't get the Windows Machine File Copy task to work. It always throws an exception: 'System Error 53: The network path was not found'. Of course the machines are not domain-joined, because I'm running my release on VSTS and need the files on an AWS VM. I tried opening port 445 (for file sharing) and made sure the user has rights to the destination path on the target machine.
So my question is: how can I actually move the files from VSTS to the AWS VM if the two machines are not domain-joined?
Use the FTP Upload or cURL Upload step/task instead.
Regarding how to create an FTP site, you can refer to this article: Creating a New FTP Site in IIS 7.
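With plain curl (or the cURL Upload task), the upload could look roughly like this; the FTP host, path, and credentials are placeholders:
curl -T webapp.zip ftp://your-ftp-host/site/ --user deployuser:deploypassword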
Disclaimer: this answer merely explains how to fulfill the requirements for the Windows Machine File Copy and Manage/Deploy IIS tasks.
Please always be concerned about the security of your target hosts; hardening and security assessment are absolutely necessary.
As noted in the comments, you need to protect the deployment channel from the outside world.
Answer:
In order to use the Windows Machine File Copy task you need to:
on the target machine (the one running IIS), enable File and Printer Sharing by running the following command from an administrative command prompt:
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes
ensure that PowerShell 4 or more recent is installed on the target machine; the following, executed from a PowerShell prompt, prints the version installed on the local machine:
PS> $PSVersionTable.PSVersion
To get PowerShell 5 you could, for example, install WMF 5;
on the target machine you must also have .NET Framework 4.5 or more recent installed (see the check below);
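A quick way to check the installed .NET Framework version is to read the Release value from the registry (378389 or higher means .NET Framework 4.5 or later):
PS> (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release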
The other two tasks (Manage/Deploy IIS App) both require a WinRM HTTPS listener to be enabled on the target machine. For a development deployment scenario you could follow these steps:
download the ConfigureWinRM.ps1 PowerShell script from the official VSTS Tasks GitHub repository;
from an administrative PowerShell prompt, enable the RemoteSigned PowerShell execution policy:
PS> Set-ExecutionPolicy RemoteSigned
run the script with the following arguments:
PS> ConfigureWinRM.ps1 FQDN https
Note that FQDN is the complete domain name of your machine as it is reached by the VSTS task, e.g. myhostname.domain.example.
Note also that this script downloads two executables (makecert.exe and winrmconf.cmd) from the Internet, so the machine must have an Internet connection. Otherwise, download those two files yourself, place them next to the script, and comment out the Download-Files invocation in the script.
Now you have a WinRM HTTPS listener enabled with a self-signed certificate. Remember to use the "Test Certificate" option for those two tasks (which ironically means not to validate the certificate; a better name would have been "Skip CA Check").
In a production deployment scenario you may instead want to use a properly signed certificate.
Windows Machine File Copy is designed to work within the same network; enabling it over the internet would open your server up to attack. It's designed for internal networks. FTP would also pose a significant security risk unless managed properly.
The easiest way to move forward would be to run an Agent on the VM in AWS that you want to release to. The agent will then download the artifacts to the AWS VM and run whatever tasks you need to install.
This allows you to run tasks on the local machine without opening it up to security risks.
If you have multiple machines to manage in AWS, you can easily create a local network that allows your single agent to use Windows Machine File Copy to push files to multiple VMs without that risk.