Sharing Cloud Foundry spaces - why and when

Trying to understand why it's a good idea for a distributed dev team to work in "one" common CF org and space... and what the downside is of not sharing the same space.
After talking to several folks quite knowledgeable about CF, my conclusion is that it's a good idea to share a space for the following two reasons:
1. if there is high interdependency across individually developed / deployed services, and/or
2. if there is a fee for each service instance, and hence you would want to limit the number of service instances per project.
In all other scenarios, it does not really matter. Developers can collaborate on GitHub, and do pushes within their own individual spaces.
Did I get that right?

I think I can explain the basic idea of spaces in Cloud Foundry. (I'm not sure what your project is or whether this will fit your case.)
If you don't use a cloud, you will usually have at least three environments: dev, stage, and prod. (Again, some types of projects don't need this.)
The dev space is for the developers (say you are building a website: when one of your developers finishes a page of the site, he/she pushes it to the dev space).
You and your devs can see the status of the deployment in this space.
Once you think dev is stable enough (or at the end of the sprint, if you follow an agile methodology), you promote what's in the dev space to staging.
In staging, QA usually looks for defects and makes sure everything works (any bug found here is filed against the dev team, which fixes it in dev, and so on).
Once staging is tested, you move it to prod.
Cloud Foundry supports this workflow using spaces.
An org is for separating different projects.
Nothing I've said here is enforced in any way by Cloud Foundry; ultimately you can choose whatever layout works best for you.
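To make that concrete, here is a rough cf CLI sketch of that layout (the org and app names are made up for illustration):

    cf create-org my-project
    cf create-space dev -o my-project
    cf create-space stage -o my-project
    cf create-space prod -o my-project
    # developers target the dev space and push there
    cf target -o my-project -s dev
    cf push my-app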

Related

cloudfoundry: space sharing approach with multiple developers

With the Cloud Foundry landscape on SAP Cloud Platform, I was looking at approaches / best practices for when multiple developers work on the same app.
Should each developer have a separate space and push the app each time they want to test?
or
Should there be one "development" space with multiple apps and some kind of service sharing among the different apps that each developer pushes? In this approach I do see issues if I change a service (e.g. a DB) in a way that might affect other apps that share it.
I went through the documentation but did not get any hints about multiple developers working in the same space, so any advice / experience is highly appreciated.
I don't think anyone can tell you what makes sense for your organization. You need to think about how you want to organize your users, applications & spaces. Quota management plays into this, but so does access to apps & services.
Things to consider:
Who needs access to the apps you're putting into the space? You can control this with the space roles (space dev, manager & auditor).
How do you want to manage the spaces? This is going to give you some insight into how you want to structure your orgs. Org permissions allow an operator to delegate management to someone else (org manager, auditor roles).
How do you need to restrict the resources that your users consume? i.e. quotas. You can apply quotas to orgs & spaces. Consider how you want to restrict what groups can do.
Consider org/space boundaries. Will you need to share things like services across orgs & spaces? There is some capacity to do this through the foundation, but be careful, because individual service brokers need to support it as well. At the time of writing some do and some do not. Those that do not would need to be shared with the much less convenient user-provided service (a rough CLI sketch follows these points).
https://docs.cloudfoundry.org/devguide/services/sharing-instances.html
Consider if you need to do billing or charge-back. Doing this per org/space makes sense, so you would need to align that with how you're doing billing/charge-back.
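For the service-sharing point above, the two options look roughly like this with the cf CLI (the instance, app, org and space names are placeholders; sharing also requires the service_instance_sharing feature flag and a broker that supports it):

    # share a managed service instance from the current space into another org/space
    cf share-service my-db -o team-b-org -s team-b-space
    # fallback when the broker does not support sharing: a user-provided service
    cf create-user-provided-service my-db-ups -p '{"uri":"mysql://user:pass@db-host:3306/mydb"}'
    cf bind-service other-app my-db-ups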
I wouldn't suggest adopting one of these strategies without thinking it through first, but here are a couple examples of what I've seen people do.
In teams where the developers manage the full cycle, I've seen orgs used to group development teams and spaces used to group projects or apps. Thus team A has access to org A, which has spaces X, Y and Z for apps X, Y and Z. Apps X, Y and Z deploy their dev, test, qa and prod instances into the same space.
In companies with more traditional structures, where dev teams pass off code to an ops team, I typically see orgs and spaces used to facilitate that separation. Devs have their own orgs & spaces for development & testing. Ops has their own orgs and spaces for production. The two do not mix.
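As a sketch of the first pattern (team-per-org, space-per-app), the setup might look like this; the names, roles and quota values are invented for illustration:

    cf create-org team-a
    cf create-space app-x -o team-a
    cf create-space app-y -o team-a
    # delegate org management and grant a developer access to one space
    cf set-org-role bob team-a OrgManager
    cf set-space-role alice team-a app-x SpaceDeveloper
    # restrict what a space can consume (quota is created within the targeted org)
    cf target -o team-a
    cf create-space-quota small-quota -m 4G -s 10
    cf set-space-quota app-x small-quota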
There are also variations on this, so again think through what makes sense for you and your company.
Hope that helps!
Spaces in Cloud Foundry serve the functional purpose of providing quota management and access control to a group of people. As such, you should structure your spaces according to your need to isolate/control applications.
The most common use cases for spaces are:
- dev / qa / prod
- department_a / department_b / ...
Regarding the development of shared services, please make sure you understand CF services: they provide a different mechanism from regular apps. For example, a service instance (like a DB) can be shared between different applications in different spaces:
https://docs.cloudfoundry.org/devguide/services/sharing-instances.html

Deploy hyperledger on AWS - production setup

My company is currently evaluating Hyperledger (Fabric) and we're using it for our POC. It looks very promising and we're targeting a production rollout in the next few months.
We're targeting AWS as our production environment.
However, we're struggling to find good tutorials/practices/recommendations about operating a Hyperledger network in such an environment.
I'm aware that Cello aims to solve/ease deploying/monitoring a Hyperledger network, but I also read that it's not production-ready yet. The question is, should we even consider looking at Cello at this point?
If not, what are our alternatives? Docker Swarm, Kubernetes?
I also didn't find information about recommended instance types. I understand this is application- and AWS-specific, but what are the minimal system requirements
(memory & CPU & network), for example, for a 'peer' node? (Our application is not network intensive, nor will a lot of transactions be submitted per hour/day; only a few per day.)
Another question is where to create those instances on AWS from a geographical & decentralization point of view. Does it make sense for all of them to be created in the same region? Or must we create instances running in different regions?
Thanks a lot.
Igor.
Yes, look at Cello. If nothing else, it will help you see the AWS deployment model.
There's really nothing special to it:
design the desired system (peers, orderer, gateways, etc.),
then decide how many EC2 instances you need to support that.
As for WHERE (region): it depends on where the connecting application is and what kind of fault tolerance you need for your business model.
One of the businesses I am working with wants a minimum of 99.99999% availability, so multi-region is critical. It's just another EC2 instance with sockets open from different hosts.
AWS doesn't provide much in terms of support for Hyperledger. They have some templates which allow you to set up the VMs initially, but that's stuff you can do yourself as well.
You are right, the documentation is very light and most of the time confusing. I got to the point where I can start from scratch with a brand-new VM, get everything ready, deploy my own network definition and chaincode, and have scripts to do all of that.
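If it helps, that "from scratch on a fresh VM" flow is roughly the following for a Fabric 1.x development network; this is only a sketch, and it assumes Docker, docker-compose, and the Fabric binaries/images for your release are already installed (script options vary slightly between Fabric releases):

    git clone https://github.com/hyperledger/fabric-samples.git
    cd fabric-samples/first-network
    # generate crypto material and channel artifacts, then bring the network up
    ./byfn.sh generate
    ./byfn.sh up
    # tear the network down again when finished
    ./byfn.sh down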
IBM Cloud has much better support for Hyperledger, however. You can design your network visually, download your connection profiles, deploy and instantiate chaincode, create and join channels, handle certificates: pretty much everything you need to run and support such a network. It's light years ahead of AWS. They even have a full CI/CD pipeline that you could replicate for your own project. If you look at their Marbles demo, you'll see what I mean.
Cello is definitely worth looking at, with the caveat that it's in incubation, meaning not real yet, not production-ready, and not really useful until it becomes a fully-fledged product.

django deployment with java and c++

I have created a Django app that uses C++ for some of the views as well as a Java library. How would I deploy this app? What kind of hosting service allows for multiple languages? I have looked at EC2, GAE, and several platforms (like Heroku) but I can't seem to find a definitive solution.
I have never deployed anything to the web so a simple explanation would be much appreciated.
PaaS offerings are probably not your best bet. If you want the scalability and associated buzzwords (the 99.9999999999% availability that comes from servers hosted in a parallel dimension without electrical storms, power outages, hurricanes, earthquakes, or nuclear holocausts) of hosting your application on a huge web company's platform, check out IaaS (Infrastructure as a Service) systems like Google's Compute Engine or AWS. With these you just get a virtual server (or servers) running your Linux distro of choice, and you can install and run whatever you please on them without being constrained to a specific platform like App Engine or Heroku (where you basically have to write your app specifically to run on that platform). If you plan on consuming a ton of bandwidth/resources from the get-go, you will almost certainly get a better deal using a dedicated server (or servers) from a small company.
Interested in what specifically you are executing C++ for in a Django view. Image/video processing?
Well. Deployment is not really something where a simple explanation helps much.
First I would check what the requirements on the operating system are (compilers, dependencies, …). That may narrow down the options quickly.
I guess that with a setup containing C++ & Java artifacts, the usual PaaS offerings (GAE, Heroku, …) will not be sufficient, because they define the stack, and a mixture of Python/C++/Java is rather uncommon, I'd say.
Choosing an IaaS offering (EC2, …) may be an option. There you can run your whole self-defined stack and have the possibility of easier scaling.
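For example, on a plain Ubuntu VM from EC2 or Compute Engine, pulling in the toolchains yourself might look roughly like this (package names assume Ubuntu/Debian):

    sudo apt-get update
    # C++ toolchain, a JDK for the Java library, Python tooling and a web server
    sudo apt-get install -y build-essential default-jdk python3 python3-venv python3-pip nginx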
Hosting the application on your own server(s) is also always possible. Check your data protection regulations to find out if it's not even a requirement.
There are a lot of ways to get the Django application to run. The Django documentation has some information about deployment. If you have certain special requirements, uwsgi may be a good application server.
You may also want a web server in front of the application. Possibilities range from using uWSGI's built-in HTTP server to running e.g. Nginx in front of uWSGI.
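A minimal sketch of that, assuming a Django project called mysite with a mysite/wsgi.py module (both names are hypothetical):

    pip install uwsgi
    # serve the app directly over HTTP on port 8000
    uwsgi --http :8000 --module mysite.wsgi --master --processes 4
    # or, behind Nginx, speak the uwsgi protocol over a unix socket instead
    uwsgi --socket /tmp/mysite.sock --chmod-socket=664 --module mysite.wsgi --master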
All in all, every component of the whole "deployment" has hundreds of bells and whistles, and it's not easy to give advice without knowing the specific requirements and properties of the system itself. You'll also probably need a database you have to deploy.
But before deploying it to the web, it's also important to have a solid build process to assemble all the parts, and not only on the development machine. With three languages involved, this should be the first step to solve. If it easily and automagically deploys in a development environment, moving it to a server is easier.

Django website accessible to others just for testing

Right now the website is running locally and I'm still working on it.
While doing this I also have to make it visible to a specific group of users as I need their feedback in order to add/change features, etc.
I've tried to find a free web hosting without any luck (see dependencies).
I was thinking to create a VPN but then I will have to use my PC as a host for a virtual machine which is by far not what I'm looking for.
Therefore, my questions are:
1. Which is the best way to achieve this (website visibility for TESTING) fast and easy?
2. If a dedicated web host is the best solution, please point me to an easy-to-use and cheap one. What I've tried so far: ElasticHosts, alwaysdata, stackable, 1FreeHosting and probably others I don't remember right now. For one reason or another I couldn't use any of the above.
Another aspect to be considered: I want this only for simple testing and I don't need a lot of server resources. Also the traffic will be very low as there are only 5 testers. That's why I wouldn't pay too much for it. I will probably need this temporary web hosting for 2-3 months.
Dependencies:
- as the website uses mezzanine, for the moment I only need mezzanine's dependencies.
Thanks in advance!
You can always just set up port forwarding on your router. This would allow your testers direct access to your app, though it might give your PC more exposure than you want.
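A quick sketch of that approach, assuming you are fine exposing the Django development server temporarily:

    # bind the dev server to all interfaces so the router can forward traffic to it
    # (add your public or dynamic-DNS hostname to ALLOWED_HOSTS in settings.py first)
    python manage.py runserver 0.0.0.0:8000
    # then forward an external port (e.g. 8000) to this machine in your router's admin UI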
Heroku has a free tier.
Among the non-free options, an instance at Linode costs $20/month but requires some setup. Rackspace has similar options in their Cloud Servers line. Both are no-contract servers.
My blogpost covers gracefully deploying a Mezzanine site. The monthly hosting cost is nothing compared to the cost of a slow, painful deployment process.
An EC2 micro-instance right now costs as little as ~US$3.50/month. I create and destroy staging servers on EC2 servers for testing and sharing with others.
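For what it's worth, that create-and-destroy cycle can be scripted with the AWS CLI; the AMI ID, key name, security group and instance ID below are placeholders:

    aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro \
        --key-name my-key --security-group-ids sg-0123456789abcdef0
    # hand the instance's public DNS name to your testers, then clean up afterwards
    aws ec2 terminate-instances --instance-ids i-0123456789abcdef0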

Migrate hosted LAMP site to AWS

Is there an easy way to migrate a hosted LAMP site to Amazon Web Services? I have hobby sites and sites for family members where we're spending far too much per month compared to what we would be paying on AWS.
Typical el cheapo example of what I'd like to move over to AWS:
GoDaddy domain
site hosted at 1&1 or MochaHost
a handful of PHP files within a certain directory structure
a small MySQL database
.htaccess file for URL rewriting and the like
The tutorials I've found online necessitate PuTTY, Linux commands, etc. While these aren't the most cumbersome hurdles imaginable, it seems overly complicated. What's the easiest way to do this?
The ideal solution would be something like what you do to set up a web host: point GoDaddy to it, upload files, import database, done. (Bonus points for phpMyAdmin being already installed but certainly not necessary.)
It would seem the Amazon AWS Marketplace now has a solution for your problem:
https://aws.amazon.com/marketplace/pp/B0078UIFF2/ref=gtw_msl_title/182-2227858-3810327?ie=UTF8&pf_rd_r=1RMV12H8SJEKSDPC569Y&pf_rd_m=A33KC2ESLMUT5Y&pf_rd_t=101&pf_rd_i=awsmp-gateway-1&pf_rd_p=1362852262&pf_rd_s=right-3
Or from their own site
http://www.turnkeylinux.org/lampstack
A full LAMP stack including PHPMyAdmin with no setup required.
As for your site and database migration itself (which should require no more than file copies and a database backup/restore) the only way to make this less cumbersome is to have someone else do it for you...
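The backup/restore itself is the usual dump-and-copy routine; the hostnames, users, database names and paths below are placeholders:

    # dump the database at the old host and grab the files
    mysqldump -u olduser -p mydb > mydb.sql
    rsync -avz ./public_html/ ec2-user@new-host:/var/www/html/
    scp mydb.sql ec2-user@new-host:
    # on the new server: import the dump
    mysql -u newuser -p mydb < mydb.sql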
Dinah,
As a Web Development company I've experienced an unreal number of hosting companies. I've also been very closely involved with investigating cloud hosting solutions for sites in the LAMP and Windows stacks.
You've quoted GoDaddy, 1And1 and Mochahost for micro-sized Linux sites so I'm guessing you're using a benchmark of $2 - $4 per month, per site. It sounds like you have a "few" sites (5ish?) and need at least one database.
I've yet to see any tool that will move more than the most basic (i.e. file only, no db) websites into Cloud hosting. As most people are suggesting, there isn't much you can do to avoid the initial environment setup. (You should factor your time in too. If you spend 10 hours doing this, you could bill clients 10 x $hourly-rate and have just bought the hosting for your friends and family.)
When you look at AWS (or anyone) remember these things:
Compute cycles are only where it starts. When you buy hosting from traditional ISPs they are selling you cycles, disk space AND database hosting. Their default allowances for cycles, database size and traffic are also typically much higher before you are stopped or charged for "overage", or over-usage.
Factor in the cost of your 1 database, and consider how likely it will be that you need more. The database hosting charges can increase Cloud costs very quickly.
While you are likely going to need few CCs (compute cycles) for your basic sites, the free tier hosting maximums are still pretty low. Anticipate breaking past the free hosting and being charged monthly.
Disk space is also billed. Factor in your costs of CCs, DB and HDD by using their pricing estimator: http://calculator.s3.amazonaws.com/calc5.html
If your friends and family want to have access to the system they won't get it unless you use a hosting company that allows "white labeling" and provides a way to split your main account into smaller mini-hosting accounts. They can even be setup to give self-admin and direct billing options if you went with a host like www.rackspace.com. The problem is you don't sound like you want to bill anyone and their minimum account is likely way too big for your needs.
Remember that GoDaddy (and others) frequently give away a year of hosting with even simple domain registrations. Before I got my own servers I used to take HUGE advantage of these. I've probably been given like 40+ free hosting accounts, etc. in my lifetime as a client. (I still register a ton of domain through them. I also resell their hosting.)
If you aren't already, consider the use of CMS systems that support portaling (one instance, many websites under different domains). While I personally prefer DotNetNuke I'm sure that one of its LAMP stack competitors can do the same for you. This will keep you using only one database and simplify your needs further.
I hope this helps you make a well educated choice. I think it'll be a fine-line between benefits and costs. Only knowing the exact size of every site, every database and the typical traffic would allow this to be determined in advance. Database count and traffic will be your main "enemies". Optimize files to reduce disk-space needs AND your traffic levels in terms of data transferred.
Best of luck.
Actually, it depends upon your server architecture whether you want to migrate the whole of your LAMP stack to Amazon EC2,
or use different Amazon web services for different server components, like Amazon S3 for storage and Amazon RDS for the MySQL database, and so on.
In case you are going with LAMP on EC2: this tutorial will at least give you a heads-up.
Anyway, you still have to go through the essential steps of setting up the AMI and installing LAMP over SSH.
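Those essential steps boil down to something like the following on an Ubuntu AMI; the hostname and key are placeholders, and package names differ on Amazon Linux (which uses yum and httpd):

    ssh -i my-key.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
    sudo apt-get update
    sudo apt-get install -y apache2 mysql-server php libapache2-mod-php php-mysql
    # enable mod_rewrite (you may also need AllowOverride All in the vhost) so .htaccess rules work
    sudo a2enmod rewrite
    sudo systemctl restart apache2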