I'm running a FastAPI server on an EC2 Ubuntu instance. Everything works fine when I SSH into the EC2 instance and run commands, but I want the server to keep running when my local machine is off.
So I tried AWS Systems Manager's Run Command. The connection looks fine, but when I cd into the server code directory and run ls, it outputs nothing. Also, when I run poetry run python main.py in the server folder, which works perfectly when I SSH into the server from my local machine, it says poetry: not found.
Why is this happening? And is there another way I can run my server while being able to turn off my local machine?
There is no relation between your local machine and your server in the cloud; your EC2 instance stays alive and keeps running your services even when your machine is off.
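Note that Run Command typically executes as root with a non-interactive shell, which is often why ls shows nothing where you expect your files and per-user installs like poetry are not on PATH there. To keep the server running after you disconnect, one option is to start it detached from your SSH session. A minimal sketch, assuming the project lives in ~/server and poetry is on the ubuntu user's PATH:
# start the server detached so it survives logout; logs go to server.log
cd ~/server
nohup poetry run python main.py > server.log 2>&1 &
For something more robust (starting at boot, automatic restarts), registering the app as a systemd service is the usual next step.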
What I did is deploy a Tomcat application on EC2, but I had installed and configured it beforehand, before launching the EC2 instance.
Basically, when the app machine comes up, it should already have the Tomcat application deployed on it,
without running the shell commands manually to install and configure Tomcat.
Right now the problem is that when I stop the EC2 instance and start it again, the Tomcat application does not come up when I hit the IP.
Can you please tell me how to solve this problem?
If I understood correctly, you need to run a set of commands on instance startup to configure your application/server.
You can do this with a user data script. User data is executed at launch, and you can configure it to run on restarts as well.
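For example, a minimal user data script might look like this (a sketch, assuming Tomcat is already installed on the AMI and registered as a systemd service named tomcat):
#!/bin/bash
# enable Tomcat at boot and start it now
systemctl enable tomcat
systemctl start tomcat
By default cloud-init runs user data scripts only on the first launch; to run them on every boot, you can set cloud-init's scripts-user module to "always" via a cloud-config part in the user data.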
If I understood right, you want Tomcat to be started every time you restart your server.
You can configure Tomcat as a Linux service and enable that service so that Tomcat starts every time your system reboots. This way, you don't have to start Tomcat manually after each reboot.
Reference: https://www.digitalocean.com/community/tutorials/install-tomcat-on-linux
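For example, a minimal sketch of setting Tomcat up as a systemd service along the lines of that tutorial, assuming Tomcat is unpacked under /opt/tomcat and runs as a dedicated tomcat user:
# create /etc/systemd/system/tomcat.service
sudo tee /etc/systemd/system/tomcat.service > /dev/null <<'EOF'
[Unit]
Description=Apache Tomcat
After=network.target

[Service]
Type=forking
User=tomcat
Group=tomcat
Environment=CATALINA_PID=/opt/tomcat/temp/tomcat.pid
Environment=CATALINA_HOME=/opt/tomcat
Environment=CATALINA_BASE=/opt/tomcat
ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# enable at boot and start immediately
sudo systemctl daemon-reload
sudo systemctl enable --now tomcat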
I'm running local Cloud Run services with the Cloud Code plugin for IntelliJ (PyCharm), but the locally deployed service cannot connect to the Redis instance running in Docker:
redis.exceptions.ConnectionError: Error 111 connecting to 127.0.0.1:6379. Connection refused.
I can connect to the locally running redis instance from a python shell, it's just the cloud run service running in minikube/docker that cannot seem to connect to it.
Any ideas?
Edit, since people are suggesting completely unrelated posts: the locally running Cloud Run instance uses Docker and Minikube and is automatically configured by Cloud Code for IntelliJ. I suspect that Cloud Code for IntelliJ puts Cloud Run instances into an environment that cannot access services running on macOS localhost (but can access the Internet), which is why I tagged those specific items in the post. Please limit suggestions to ones that take these items into account.
If you list the Docker networks using:
docker network list
you'll see a network called cloud-run-dev-internal. You need to connect your Redis container to that network. To do that, run this command (this assumes your container name is some-redis):
docker network connect cloud-run-dev-internal some-redis
Double check that your container is connected to the network:
docker network inspect cloud-run-dev-internal
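You can also quickly verify that Redis is reachable by name on that network (a sketch, assuming you can pull the redis:alpine image):
docker run --rm --network cloud-run-dev-internal redis:alpine redis-cli -h some-redis ping
# should print PONG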
Then connect to the Redis host using the container name:
import os
import redis
...
# default to the container name reachable on the cloud-run-dev-internal network
redis_host = os.environ.get('REDISHOST', 'some-redis')
redis_port = int(os.environ.get('REDISPORT', 6379))
redis_client = redis.StrictRedis(host=redis_host, port=redis_port)
I am trying to host an Apache Superset server on an Amazon EC2 instance. Whenever I start it from a standalone SSH session, the moment I close the terminal on my laptop, the Superset server shuts down. Is there a way I can host the Superset server on an Amazon EC2 instance so that it is always online?
Run it using nohup superset <options> &. Then, even after you close the terminal, it continues to run.
To stop it, kill the process by its PID, which you can find using ps and grep.
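For example (a sketch; the exact options depend on your Superset version and setup):
# start Superset detached from the terminal; output goes to nohup.out
nohup superset run -h 0.0.0.0 -p 8088 &
# later, find its PID and stop it (replace <PID> with the number shown)
ps aux | grep superset
kill <PID>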
I have a django (1.10) app running in Elastic Beanstalk.
I want to dump some apps data to fixtures and download these fixtures to my local machine (to replicate in my local database).
So far, I've eb ssh'ed into my instance and dumped the data to ~/myapp_current.json.
But I cannot find a way to copy the file to my local machine. There is no eb scp command.
When you run eb ssh locally, eb will print out the actual SSH command it's running. For instance:
INFO: Running ssh -i /Users/me/.ssh/aws.pem ec2-user@3.4.5.6
Just copy that ssh command, change ssh to scp, add the remote file path and a local destination, and run it locally:
scp -i /Users/me/.ssh/aws.pem ec2-user@3.4.5.6:myapp_current.json ./
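To copy a file in the other direction (from your machine to the instance), just swap the source and destination; for example, with a hypothetical local_file.json:
scp -i /Users/me/.ssh/aws.pem ./local_file.json ec2-user@3.4.5.6:~/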
In your environment there is an "Application versions" option, where you get a list of all the versions of your application that you have uploaded. You can select the desired version and download it.
I can set up an IPython/Jupyter notebook server on AWS EC2 by following this tutorial; it starts the remote server by entering $ jupyter notebook in the local terminal.
However, I also saw a pre-configured community AMI, graphlab-create, which runs the remote server without the need for a Linux/Unix SSH client at all.
I'm wondering how that could be realized, since some students may not have access to a Linux/Unix system. Any hint is appreciated.
Using Windows is not an issue. I connected to my notebook on AWS from my home computer, which runs Windows 10.
You can connect to AWS using PuTTY.
I am using an Ubuntu AMI.
Once you have a terminal open, you simply follow the instructions in the link you gave.
It worked like a charm for me.
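If the goal is to avoid an SSH client entirely (presumably what the pre-configured AMI does), one option is to have the instance start the notebook server itself, listening on all interfaces, so students only need a browser. A minimal sketch, assuming Jupyter is installed on the instance, port 8888 is open in the security group, and a notebook password has been configured:
# run on the EC2 instance, e.g. from user data or a systemd service so it starts at boot
nohup jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser &
Students can then open http://<instance-public-ip>:8888 in a browser.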