I have created a Django app that contains C++ for some of the views as well as a Java library. How would I deploy this app? What kind of hosting service allows for multiple languages? I have looked at EC2, GAE, and several platforms (like Heroku), but I can't seem to find a definitive solution.
I have never deployed anything to the web so a simple explanation would be much appreciated.
PaaS stuff is probably not your best bet. If you want the scalability and associated buzzwords (the mythical 99.9999999999% availability, as though the servers were hosted in a parallel dimension free of electrical storms, power outages, hurricanes, earthquakes, and nuclear holocausts) that come with hosting your application on a huge web company's platform, check out IaaS (Infrastructure as a Service) systems like Google's Compute Engine or AWS. With these you just get a virtual server (or servers) running your Linux distro of choice, and you can install and run whatever you please on them without being constrained to a specific platform like App Engine or Heroku, where you basically have to write your app specifically for that platform. If you plan on consuming a ton of bandwidth/resources from the get-go, you will almost certainly get a better deal using dedicated servers from a small company.
I'm interested in what specifically you are executing C++ for in a Django view. Image/video processing?
Well. Deployment is not really something where a simple explanation helps much.
First I would check what the requirements on the operating system are (compilers, dependencies, …). That may narrow down the options quickly.
I guess that with a setup containing C++ and Java artifacts, the usual PaaS offerings (GAE, Heroku, …) will not be sufficient, because they define the stack. And a mixture of Python/C++/Java is rather uncommon, I'd say.
Choosing an IaaS offering (EC2, …) may be an option. There you can run your whole self-defined stack and scale more easily.
Hosting the application on your own server(s) is always possible, too. Check your data protection regulations; self-hosting may even be a requirement.
There are a lot of ways to get the Django application to run. The Django documentation has some information about deployment. If you have certain special requirements, uwsgi may be a good application server.
You may also want a web server in front of the application. Possibilities range from uwsgi's built-in HTTP server to putting e.g. Nginx in front of uwsgi.
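To make that concrete, here is a minimal sketch of a uwsgi configuration for a Django project, with a matching Nginx block in front of it. All paths, module names, and process counts here are assumptions for illustration, not a recommended production setup:

    # uwsgi.ini -- a minimal sketch; paths and names are hypothetical
    [uwsgi]
    chdir = /srv/myproject
    module = myproject.wsgi:application
    master = true
    processes = 4
    socket = /run/myproject.sock
    chmod-socket = 660
    vacuum = true

    # matching Nginx server block
    server {
        listen 80;
        server_name myproject.example.com;
        location / {
            include uwsgi_params;
            uwsgi_pass unix:/run/myproject.sock;
        }
    }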
All in all, every component of the whole "deployment" has hundreds of bells and whistles, and it's not easy to give advice without knowing the specific requirements and properties of the system itself. You'll probably also need to deploy a database.
But before deploying it to the web, it's also important to have a solid build process to assemble all the parts, and not only on the development machine. With three languages involved, this should be the first problem to solve. If the app builds and deploys automatically in a development environment, moving it to a server is much easier.
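On the integration point: one common way to keep C++ callable from a Django view is to compile it into a shared library with an extern "C" interface and load it with ctypes. A minimal sketch, where the library path and function name are hypothetical:

    # the C++ side must expose a C-compatible symbol, e.g.:
    #   extern "C" double heavy_compute(double x);
    # compiled with: g++ -shared -fPIC -o libcompute.so compute.cpp
    import ctypes

    lib = ctypes.CDLL('/srv/myproject/lib/libcompute.so')  # hypothetical path
    lib.heavy_compute.argtypes = [ctypes.c_double]
    lib.heavy_compute.restype = ctypes.c_double

    def compute(x):
        """Helper a Django view can call; runs the C++ implementation."""
        return lib.heavy_compute(float(x))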
I am new to the cloud and still learning GCP. I exhausted almost all of my free credit for GCP within 2 months while learning the different modules.
GCP is great and provides a lot of things to ease the development and maintenance process.
But I realized that using the different modules cost me a lot.
So I was wondering: if I had one big VM box and installed MySQL, Docker, Java, and the required React components myself, I could achieve pretty much what I want without using the extra modules.
Am I right?
Can I use the same VM to host multiple sites by running the APIs on different ports, or do I need separate boxes for that?
Your question is less about GCP and more about IT architecture. You can create one big VM with everything installed on it, but you have to manage it yourself, and scaling it is hard.
You can also have 1 VM per website, but the management cost is higher (patching and upgrades). However, you can scale with finer granularity (per website).
The standard pattern today is to split your monolithic server into dedicated services: the database on one server, Docker and Java on another, and the React app served as static content (e.g. from Google Cloud Storage).
If you want to stay with VMs, you can containerize your application and use GKE. It's far easier to maintain your VMs with an automated tool like Kubernetes.
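As an illustration of that containerization step, a minimal Dockerfile for the Java backend might look like this (the base image, jar name, and port are assumptions, not a prescribed setup):

    # Dockerfile -- a minimal sketch, not a production configuration
    FROM eclipse-temurin:17-jre
    WORKDIR /app
    COPY target/app.jar app.jar
    EXPOSE 8080
    ENTRYPOINT ["java", "-jar", "app.jar"]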
The ultimate step is to go serverless and/or fully managed: Cloud SQL for your database, GCS for your static content, and App Engine or Cloud Run for your backend. This way you pay for what you use, and if your website sees little traffic, you won't be charged much for it (except for the database).
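Back to the question about ports: on a single VM the usual approach is not to expose each API on a different public port, but to run each site on its own internal port and put a reverse proxy in front. A minimal Nginx sketch, where the hostnames and ports are hypothetical:

    server {
        listen 80;
        server_name site-a.example.com;
        location / { proxy_pass http://127.0.0.1:8081; }  # site A's backend
    }

    server {
        listen 80;
        server_name site-b.example.com;
        location / { proxy_pass http://127.0.0.1:8082; }  # site B's backend
    }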
From what I know, Parse offers convenient communication stacks for various platforms such as iOS, so it is easy to build clients that use your web app.
But Parse also seems to be tightly integrated with Facebook. If you were to build a web app that does not need Facebook, but that may integrate with Facebook in the long term, is Parse the clear winner over deploying directly to AWS, or are there important disadvantages to consider?
As far as I understand their page, Parse is a PaaS (Platform as a Service) provider like Heroku and others, while AWS is an IaaS (Infrastructure as a Service) provider.
Pros for PaaS:
They take care of the infrastructure
You build your app on an existing platform
For the start you don't need "ops-guys" as you don't do ops
You can use their knowledge and prebuilt tools to your advantage
Pros for IaaS:
You have full control over the underlying infrastructure
You can start with a greenfield and build whatever you want
You can use tools like Puppet / Chef / ... to control your servers
You don't have to pay for the additional stuff you get when using PaaS
(but you have to pay your own people for it instead)
So there is no winner in this "battle"; you have to decide whether you want to use prebuilt tooling and give up some independence for it, or whether you want near-absolute control over everything (near, since you still can't touch the hardware) and invest time and manpower into building your own tooling.
"Better, Faster, Cheaper.."
If you are pursuing a mobile-first strategy, Parse is a great tool for bootstrapping a mature, full web presence from nothing more than an original beta app.
I don't have direct experience with AWS.
I have used Heroku/Parse to integrate (very quickly) a standalone mobile app with a back end that needs to cover the following:
DB/persistence/NoSQL
Workflow - async tasks
REST API interface over HTTP
Once the mobile app existed with only stubbed local data, Parse allowed a single engineer to build out ALL of the infrastructure mentioned above very quickly, taking the app from single-user to multi-user with a full DB and workflow that backs client-side events with considerable server-side and cloud-side business logic and process. Scaling-related startup work that used to take weeks took only days.
The compression (time & money) when scaling up an app stack is really something. The Parse API did almost everything that I needed, with one small exception (remuxing UGC media).
Personally, I abandoned the Parse/Android SDK in favor of the more robust REST API (threading on the client side and heavy HTTP activity).
Developers used to curl/REST dev stacks will take to Parse quickly.
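To give a flavor of the REST route: creating an object is a single authenticated HTTP call. A minimal Python sketch, where the class name, data, and keys are placeholders and the endpoint reflects Parse's hosted API of the time:

    import requests

    resp = requests.post(
        'https://api.parse.com/1/classes/GameScore',
        json={'score': 1337, 'playerName': 'Sean Plott'},
        headers={
            'X-Parse-Application-Id': 'YOUR_APP_ID',   # placeholder
            'X-Parse-REST-API-Key': 'YOUR_REST_KEY',   # placeholder
        },
    )
    print(resp.status_code, resp.json())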
Is there someone who has experience hosting Django applications on ep.io?
What are the pros/cons of it?
I'm currently using ep.io; I'm still in development with my app, but I have an app deployed and running.
When you use a service like this, you go into it knowing that it isn't going to be the perfect solution for every case. Knowing the pros and cons beforehand will help set your expectations so that you aren't disappointed later on.
ep.io is still very young and I believe still in beta, and isn't available to the general public. To be totally fair to them, it is still a work in progress and some of these pros and cons may change as they roll out new features. I will try and come back and update this post as the new versions become available, and my experience with the service continues.
So far I am really pleased with what they have, they took the most annoying part of developing an application and made it better. If you have a simple blog app, it should be a breeze to deploy it, and probably not cost that much to host.
Pros:
Server management: You don't have to worry about your server setup at all; it handles everything for you. With a VPS you would need to keep the server up to date with security patches and all that fun stuff; with this, you don't worry about anything, they take care of all of that for you.
Deployment: Deploying an app and getting it up and running is really quick. Deploying a new version is a piece of cake: I just run one, maybe two commands, and it handles everything for me.
Pricing: You are only charged for what you use, so if you have a very low-traffic website, it might not cost you anything at all.
Scaling: They handle scaling and load balancing for you out of the box, no need for you to worry about that. You still need to write your application so that it can scale efficiently, but if you do, they will handle the rest.
Background tasks: They have support for cron jobs as well as background workers using Celery; a minimal task sketch follows this list.
Customer support: I had a few questions, sent them an email, and had an answer really fast; they have been great, so much better than I would have expected. If you run your own VPS, you really don't have anyone to talk to, so this is a major plus.
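For reference, a background worker in this setup is just an ordinary Celery task. A minimal sketch using the current Celery API, with a Redis broker assumed since that is what ep.io provides (the module name, broker URL, and task are hypothetical):

    from celery import Celery

    # broker URL is an assumption; ep.io wires up Redis for you
    app = Celery('tasks', broker='redis://localhost:6379/0')

    @app.task
    def resize_image(image_id):
        # placeholder for real background work
        print('resizing image %s' % image_id)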
Cons:
DB access: You don't have direct access to the database; you can get to the psql shell, but you can't connect an external client GUI. This makes some things a little more difficult or slow, but you can still use the Django admin or fixtures to do a lot of things.
Limited services available: It currently only supports PostgreSQL and Redis, so if you want to use MySQL, memcached, MongoDB, etc., you are out of luck.
Low-level C libs: You can't install arbitrary dependencies. Similar to Google App Engine, they have some of the common C libraries installed already, and if you want something that isn't already installed, you will need to contact them to get it added. http://www.ep.io/docs/runtime/#python-libraries
Email: You can't send or receive email, which means you will need to depend on a third party for that; that's probably good practice anyway, but it means more money.
File system: You have a more limited file system available to you, and because of the distributed nature of the system you need to be very careful when working with files. You can't (unless I missed it) connect to your account via (S)FTP to upload files; you need to use the ep.io command-line tool and either do an rsync or push a repo to get files up there.
Update: For more info, see my blog post on my experiences with ep.io: http://kencochrane.net/blog/2011/04/my-experiences-with-epio/
Update: ep.io closed down on May 31st, 2012.
I'm not the only one with this question, but haven't found a lot of information in my research so far, so help me out.
We are a small IT crowd in an organization. We're looking to build a small, private service that would emulate a Heroku/GAE workflow. The basics of it: deploy an app as a git repository, and have it scale in a 'cloud' environment. Basically, a platform as a service (PaaS).
Pretend we are amateur PMs, programmers, and sysadmins tasked with this. What would you recommend? We know generally what is needed: some sort of routing, database, caching, authentication, etc. What other tools do we need?
We would prefer tools along a Ruby/Python/Haskell/Erlang dimension, on a Linux/BSD stack, with Postgres databases (CouchDB or Cassandra in the future). We are not touching anything in the MS/.NET area, and nothing on the JVM (we've looked at Steamcannon, but no; Scala and Clojure tools are not entirely out of the question). We have a basic grasp of bootstrapping a cloud (e.g. Eucalyptus) to build on. We have an understanding of the basics of server admin, and the physical infrastructure limitations aren't a factor right now.
We're not looking into why gaerokuyardspace is the best choice, a list of such services, why we should ditch our plans for one of these services, or an argument against this plan. For this situation the decision has been made that the cost to build privately is more attractive than the cost of deploying elsewhere. We already know why and how for these services. We're looking to emulate and build upon these for private needs.
A short list of tools to be expanded:
Beehive
Steamcannon
Gitosis/Gitolite
?
Basically, I'd like to generate a list of tools for building heroku/gae like service on a small, private, definitely experimental/toy level.
I don't know that it will meet all of your stated needs today, but you should take a look at Cloud Foundry from VMware. You can check the FAQ for the commercial project or look into the open source version that you can host and manage yourself.
Some combination of Cloud Foundry (above), gitolite, and fabric will probably do well for you. Any such solution will take some time to get right.
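To illustrate the fabric piece: most of the push-to-deploy feel comes down to scripting the deploy steps. A minimal Fabric (1.x-style) sketch, where the host, paths, and reload mechanism are all assumptions:

    from fabric.api import cd, env, run

    env.hosts = ['deploy@app-server.internal']  # hypothetical host

    def deploy():
        """Pull the latest code and reload the app server."""
        with cd('/srv/myapp'):                  # hypothetical path
            run('git pull origin master')
            run('pip install -r requirements.txt')
            run('touch reload')                 # e.g. uwsgi touch-reload, if configured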
(Disclaimer: I'm a lead developer on the AppScale project)
AppScale is pretty much right up your alley, especially if you're looking to run Google App Engine apps in your own private cloud. It's open source, so grab it and extend it if there are other types of apps you want to support (and definitely commit it back to us if you do).
At my day job we have load balanced web servers which talk to load balanced app servers via web services (and lately WCF). At any given time, we have 4-6 different teams that have the ability to add new web sites or services or consume existing services. We probably have about 20-30 different web applications and corresponding services.
Unfortunately, given that we have no centralized control over this due to competing priorities, org structures, project timelines, financial buckets, etc., it is quite a mess. We have a variety of services that are reused, but a bunch that are specific to a front-end.
Ideally we would have better control over this situation, and we are trying to get control over it, but that is taking a while. One thing we would like to do is find out more about all of the inter-relationships between the web sites and the app servers.
I have used Reflector to find dependencies among assemblies, but would like to be able to see the traffic patterns between services.
What are the options for trying to map out web service relationships? For the most part, we are mainly talking about internal services (web to app, app to app, batch to app, etc.). Off the top of my head, I can think of two ways to approach it:
Analyze assemblies for any web references. The drawback here is that not everything is a web reference and I'm not sure how WCF connections are listed. However, this would at least be a start for finding 80% of the connections. Does anyone know of any tools that can do that analysis? Like I said, I've used Reflector for assembly references but can't find anything for web references.
Possibly tap into IIS and passively monitor the traffic coming in and out, and somehow figure out what is being called and from where. We are looking at enterprise tools that could help, but it would be a while before they are implemented (and they cost a lot). But is there anything out there that could help quickly and cheaply? One tool in particular (AmberPoint) can tap into IIS on the servers, monitor inbound and outbound traffic, add a little special sauce, and begin to build a map of the traffic. Very nice, but it costs a bundle.
I know, I know, how the heck did you get into this mess in the first place? Beats me, just trying to help us get control of it and get out of it.
Thanks,
Matt
The easiest way is to look through the logs, but if they don't include the referrer, then you may also want to monitor what is going out from your web server to the app server. You can use tools like Wireshark or Microsoft Network Monitor to see this traffic.
The other "solution" and I use this loosely is to bind a specific web server to app server and then run through a bundle and see what it is hitting on the app server. You could probably do this in a test environment to lesson the effects on the users of the site.
You need a service registry (UDDI??)... If you had a means to catalog these services and their consumers, it would make this job of dependency discovery a lot easier. That is not an easy solution, though. It takes time and documentation to get a catalog in place.
I think the quickest solution would be to query your IIS logs and find requests whose source addresses are your own servers. You would at least be able to track down which servers your consumers are coming from.
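As a rough illustration of that log-mining idea, here is a minimal Python sketch that scans a W3C-format IIS log for requests coming from an internal address range. The log path, field names, and address prefix are assumptions; check the #Fields header of your own logs:

    from collections import Counter

    INTERNAL_PREFIX = '10.0.'       # hypothetical internal address range
    LOG_PATH = 'u_ex110101.log'     # hypothetical IIS log file

    hits = Counter()
    fields = []

    with open(LOG_PATH) as f:
        for line in f:
            if line.startswith('#Fields:'):
                fields = line.split()[1:]            # column names for this log
            elif fields and not line.startswith('#'):
                row = dict(zip(fields, line.split()))
                client = row.get('c-ip', '')
                if client.startswith(INTERNAL_PREFIX):
                    hits[(client, row.get('cs-uri-stem', ''))] += 1

    # most frequent (internal caller, service URL) pairs
    for (client, uri), count in hits.most_common(20):
        print(count, client, uri)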
Also, if you already have some kind of authentication mechanism in place, you could trace who is using a particular service based on login.
You are right about AmberPoint. There are other tools that catalog the service traffic and provide reports showing what is happening to your services. Systinet, SOA Software, and Actional also have products similar to AmberPoint, but AmberPoint has a freeware version, I believe.