Is it possible to get the list of all instances in a DigitalOcean App (PaaS)?

As far as I know, I can use the API
https://developers.digitalocean.com/documentation/v2/#update-an-app
to scale an app up or down dynamically.
I'd like to get the IP addresses of all instances, but I can't find any API that returns the list of instances in a DigitalOcean App.
Any comments are welcome.
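For reference, a minimal sketch of that Update App scaling call, assuming the app ID, token, and spec fields shown here are placeholders; per the App Platform docs, `instance_count` on a service is what controls the number of instances:

```javascript
// Sketch: scale a DigitalOcean App by PUT-ing an updated spec.
// "my-app", "web", the app ID, and the token are placeholders.
const spec = {
  name: "my-app",
  services: [{ name: "web", instance_count: 3 /* ...rest of the spec */ }],
};

async function scaleApp(appId, token) {
  const res = await fetch(`https://api.digitalocean.com/v2/apps/${appId}`, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ spec }),
  });
  return res.json();
}
```

Note that the endpoint replaces the whole spec, so you send the full app spec with only `instance_count` changed.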

IIUC, this information is inaccessible.
DigitalOcean Apps is a platform built atop Kubernetes (Knative?), and the IPs of the containers that run your app's workers aren't exposed (and are probably non-routable).
The load balancer is likely all you can get.

Related

How to deploy many applications with subdomains in AWS? Which config should I use?

I'm new to AWS and deployment. Here's my doubt.
I have a domain, let's say www.company.com, and 3 apps to deploy:
NodeJS Backend (api.clientName.company.com)
React Marketplace (clientName.company.com)
React Admin Panel (admin.clientName.company.com)
I'll customize these per each client's requirements and deploy them like clientName.company.com.
What config should I use for many clients? I've now got 5 more new clients. Can I create only one EC2 instance and manage all clients' apps together? Also, can I migrate my domains to Route 53 so that I can create unlimited subdomains?
Can anyone clear up these doubts?
Yes, you can manage multiple applications on a single EC2 instance, but make sure the instance size is large enough to withstand the traffic.
You can achieve this by using Laravel Forge, which can deploy Laravel and plain PHP applications as well; it just makes things easier to manage.
If you are not comfortable using Laravel Forge, you can use FileZilla too.
It is better if you migrate your domains to Route 53, as it will be easier to manage and everything will be in one place.
You can create the subdomains as you like and map them to the EC2 instance's Elastic IP. Make sure you attach an EIP so that the IP address does not change after a reboot.
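To illustrate the subdomain-to-EIP mapping, this is roughly the record-set change you would send through Route 53 (a sketch of the payload shape used by the AWS SDK's `changeResourceRecordSets`; the zone ID, domain, and IP below are placeholders):

```javascript
// Sketch: an UPSERT that points clientName.company.com at the Elastic IP.
// Repeat with a different Name for each new client subdomain.
const change = {
  HostedZoneId: "Z0000000000000", // placeholder hosted-zone ID
  ChangeBatch: {
    Changes: [
      {
        Action: "UPSERT",
        ResourceRecordSet: {
          Name: "clientName.company.com",
          Type: "A",
          TTL: 300,
          ResourceRecords: [{ Value: "203.0.113.10" }], // the Elastic IP
        },
      },
    ],
  },
};
```

Because every subdomain points at the same EIP, adding a client is just one more UPSERT; the web server on the instance then routes by Host header.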

Hosting multiple web apps on AWS

I'm running multiple websites on an EC2 instance, separated by vhost.conf, but I sometimes run out of resources because some web apps consume more than others. I also experience downtime: if one site generates a config error in Apache, all sites simply stop responding. On top of that, I'd also like to run nginx and other web servers.
My proposed solutions are:
1. Create an ECS cluster and containerize each website on a separate EC2 instance inside ECS, meaning I will have multiple EC2 instances in ECS.
2. Create a container for each site on an already existing EC2 instance, but the downside is that I will still have a single point of failure.
Kindly advise on the best approach and/or add other suitable suggestions.

Sails app with multiple instances on AWS - Redis/Elasticache/ALB

I'm building a Sails app that is using socket.io and see that Sails offers a method for using multiple servers via redis:
http://sailsjs.org/documentation/concepts/realtime/multi-server-environments
Since I will be placing the app on AWS, preferably with an ELB (Elastic Load Balancer) and an Auto Scaling group with multiple EC2 instances, I was wondering how to handle this so it doesn't need a separate self-managed Redis instance.
Maybe we can use AWS ElastiCache? If so, how would this be done?
Now that AWS has released the new ALB (Application Load Balancer), which supports WebSockets, could this be used to help simplify things?
Thanks in advance.
Use cases in the application:
1. Allow end users to update data dynamically from their own dashboard, and display analytics/stats in real time to an administrator.
2. Application statuses change based on specific timings, e.g. at a given start date/time the app allows users to update data.
Regarding your first question, you don't want to run Redis on the same servers that Sails is running on, especially if you are using AutoScaling. The Redis server needs to be a separate server that won't disappear if your environment experiences a "scale-in" event. So Redis is going to have to be on a separate "server" somewhere.
ElastiCache is just separate EC2 instances, running Redis, where AWS handles most of the management for you to the point that you can't even SSH into the instance. It's similar to how RDS works. ElastiCache will certainly work for your scenario. You might also want to look at the third-party service RedisLabs which also manages Redis instances on AWS for you.
Regarding your second question, an Application Load Balancer will have no bearing on your Redis usage. It will however bring actual support for WebSockets which it sounds like you are using. So yes, you should be using an ALB instead of an ELB.
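To point Sails at an ElastiCache endpoint, the config is roughly as follows. This is a sketch based on the multi-server docs linked in the question; the adapter name matches those docs for that Sails version, and the endpoint is a placeholder for your cluster's primary endpoint:

```javascript
// config/env/production.js (sketch): point Sails' socket layer at Redis
// so all instances behind the ALB share pub/sub state. The host below is
// a placeholder ElastiCache endpoint.
module.exports = {
  sockets: {
    adapter: "socket.io-redis",
    host: "my-cluster.abc123.0001.use1.cache.amazonaws.com",
    port: 6379,
  },
};
```

Make sure the ElastiCache security group allows inbound traffic on 6379 from your EC2 instances' security group.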

Deploy and manage WebApp with AWS services

I'm a noob with AWS services. I develop web applications with Ruby on Rails, so I'd like to know the best (or the right) way to deploy and manage a web application on AWS.
Right now there are a bunch of AWS services for handling web apps, but I'm not sure which to use: OpsWorks, EC2 (setting up the entire server yourself), Elastic Beanstalk, EC2 Container Service, and so on…
I have 3 small apps from different clients, and I'm looking for the right way to host them on one instance or a couple of instances. Right now I'm using OpsWorks with 3 stacks, one for each web app. I want to know if I can deploy and manage those apps in one stack with 2 OpsWorks instances, or whether there is a better way or other IaaS or PaaS services. So I'm looking for advice or orientation on using AWS services for this kind of thing.
This question is rather vague and the answer depends on the needs of your app, but I'll give my 2 cents regardless. I have several Rails apps hosted on EC2 instances running Ubuntu, NGINX, and Phusion Passenger. The apps that receive a decent amount of traffic and require consistent performance/availability are cloned across multiple EC2 instances (in multiple availability zones) and have traffic managed by Elastic Load Balancers (ELBs). The app databases are served through Amazon's RDS service. Domain registration and nameservers are set up through AWS Route 53. Static assets are served from AWS S3.
This type of architecture certainly has a price tag on it and isn't the only way to do it. My experience has been that all of my older Rails apps have survived over a year with 100% uptime and rarely have moments of slowness been the fault of AWS as opposed to my own code or 3rd-party software.
Hope this helps; feel free to ask questions.

Heroku + Amazon EC2 Security groups

I have read many things about connecting a Heroku Django app to my Amazon EC2 instance (which will serve as an external API for my site), and most of the solutions were either not free or no longer advised.
I have 2 problems.
1 - Basically my problem is related to Amazon's security groups, where I need to add a static IP to allow external connections into Amazon. The problem is that Heroku's dynos are dynamic, and to make them static I need a NON-FREE add-on, something like Prometheus for example.
I was looking for a free solution. Any ideas on how I can achieve that?
2 - Amazon EC2 instances also have dynamic IPs. What's the best free solution to have static IPs, so I can give one to my Heroku app and tell it to always connect to that API sitting on Amazon EC2?
Thanks in advance.
There is no way to reliably have static IPs for your dynos. In this case you'll have to do without firewall-level security (meaning, opening up your relevant ports to 0.0.0.0/0) and make sure that you have proper authentication mechanisms for your API, such that no malicious user can gain access to your data. Most web frameworks support this out of the box, or have modules for API authentication (OAuth comes to mind).
Amazon EC2 instances can have static IPs via Elastic IP addresses: you can request a new Elastic IP and attach it to a running instance. Unless explicitly detached and released, it will remain static for as long as the instance is alive.
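As a minimal illustration of the authentication point above, a shared-token check on the EC2 API side could look like this. This is a hypothetical sketch; the `Bearer` header format and the token handling are assumptions, not anything prescribed by Heroku or AWS:

```javascript
// Hypothetical sketch: verify a shared secret sent by the Heroku app in
// the Authorization header, instead of relying on IP-based firewall rules.
function isAuthorized(authHeader, expectedToken) {
  // Expect exactly "Bearer <token>"; reject anything else.
  return authHeader === `Bearer ${expectedToken}`;
}

// In an Express-style handler you would check it per request, e.g.:
// if (!isAuthorized(req.headers.authorization, process.env.API_TOKEN)) {
//   return res.status(401).end();
// }
```

The Heroku app then sends the same token (stored in a config var) with every request, so it doesn't matter which dyno IP the request comes from.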