I would prefer not to be tied to AWS technologies, and I was wondering: what are the mature open-source equivalents of Amazon CloudFront?
A CDN is not really a piece of software that can be open-sourced. It is a network of computers (points of presence, or POPs), i.e. a lot of hardware, servers, and bandwidth is needed to implement one. Hence an open-source equivalent is not really possible.
Regarding getting tied to AWS: all you need to do is create a separate hostname for your media URLs, for example http://cdn.example.com, and serve all of the static content that should go through the CDN from that hostname, e.g. http://cdn.example.com/abc.jpg.
Now you can simply create A or CNAME records on your DNS server that point that hostname at Amazon CloudFront. If tomorrow you want to switch away from CloudFront, all you need to do is change those records on the DNS server. That's it; you are in no way tied to Amazon.
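For illustration only (the CloudFront hostname below is hypothetical): the record would be something like a CNAME mapping cdn.example.com to d1234abcdef8.cloudfront.net. Moving to another CDN later is then just a matter of repointing that one CNAME at the new provider's hostname, while all the URLs in your pages stay exactly the same.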
Related
I am reading about Google Cloud CDN, and in that article it says it needs 4 services (I am new to Google Cloud architecture):
Cloud DNS: Safely look past the obnoxious inclusion of the word "Cloud" here (Google marketing be damned). This "service" involves DNS configuration for a given domain, such as adding A/AAAA records. This allows us to serve assets from a custom domain (such as cdn.hackersandslackers.com, in my case).
Load Balancer: It feels a bit excessive, but GCP forces us to handle the routing of a frontend (our DNS) to a backend (storage bucket) via a load balancer.
Cloud Storage: Google's equivalent to an S3 bucket. This is where we'll be storing assets to serve our users.
Cloud CDN: When we configure our load balancer to point to a Cloud Storage bucket, we receive the option to enable "Cloud CDN." This is what tells Google to serve assets from edges in its global network and allows us to set caching policies.
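As an aside on the Cloud Storage piece: uploading an asset into the bucket that backs the CDN, with a caching policy attached, is a small operation. A minimal sketch using the @google-cloud/storage Node.js client (the bucket name and file path are made-up placeholders):

    // Sketch only: upload one asset to a (hypothetical) bucket that backs the CDN,
    // setting Cache-Control so Cloud CDN and browsers know how long to cache it.
    import { Storage } from "@google-cloud/storage";

    const storage = new Storage(); // uses Application Default Credentials

    async function uploadAsset(): Promise<void> {
      await storage.bucket("my-cdn-assets").upload("./abc.jpg", {
        destination: "images/abc.jpg",
        metadata: { cacheControl: "public, max-age=86400" },
      });
      console.log("uploaded images/abc.jpg");
    }

    uploadAsset().catch(console.error);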
My actual problem is described in more detail in How could creating symlinks/aliases to files on S3 work, while still taking advantage of CDN optimizations?, but the main thing I'm wondering is, how can I create a many-to-one mapping from custom app-level IDs, to the Google Cloud bucket file? I want to have each user get a unique URL to a file, so we can measure the traffic that comes from each user's unique shared URL (so if one user shares ImageA and it only gets 1000 views per month, vs. another user sharing ImageA getting 1M views per month, they get charged differently for traffic).
So I am trying to imagine it: we have a "frontend DNS" routed through a "Load Balancer" to the "Cloud Storage bucket", which is geographically distributed in the CDN somehow (I'm not too sure how this works exactly in Google's case). Can I insert a layer, or create a simple Node.js app at the "frontend DNS" or "load balancer" layer, which takes the URL, grabs the ID from it, looks it up in some database (like Firebase), and then forwards to the actual CDN/Cloud Storage bucket path, all the while keeping the benefit of geographically distributed content on the CDN? I don't know if this is possible, and I'm not sure where to look if it is. I'm looking for an answer on whether it's possible and, if yes, where I can learn more about how to add my custom little one-file Node.js app that maps the input ID to the bucket path.
It sounds like you can do very basic mapping with URL maps, but I am looking to insert what is basically a script or app in between, to do the mapping by looking things up in a database. Not sure if that is possible.
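One way to sketch the idea described above (this is illustrative only, not a definitive GCP answer): instead of trying to run code inside the load balancer itself, put a tiny redirect service in front of the CDN hostname. The names below are hypothetical, a "lookups" collection in Firestore standing in for the "database (like Firebase)" and cdn.example.com standing in for the Cloud CDN hostname:

    // Minimal sketch: map a per-user share ID to a CDN object path, then 302-redirect.
    // Assumes a Firestore collection "lookups" with docs like
    // { objectPath: "images/abc.jpg", ownerId: "user42" } - all names are placeholders.
    import express from "express";
    import { Firestore } from "@google-cloud/firestore";

    const app = express();
    const db = new Firestore();

    // Hypothetical CDN hostname; use whatever domain your Cloud CDN endpoint serves.
    const CDN_HOST = "https://cdn.example.com";

    app.get("/share/:id", async (req, res) => {
      const doc = await db.collection("lookups").doc(req.params.id).get();
      if (!doc.exists) {
        res.status(404).send("Unknown share ID");
        return;
      }
      const { objectPath, ownerId } = doc.data() as { objectPath: string; ownerId: string };

      // Record the hit for per-user traffic accounting (fire-and-forget).
      db.collection("hits").add({ ownerId, shareId: req.params.id, at: Date.now() })
        .catch(console.error);

      // The browser fetches the actual bytes from the CDN edge, not from this app.
      res.redirect(302, `${CDN_HOST}/${objectPath}`);
    });

    app.listen(8080, () => console.log("redirect service listening on 8080"));

The trade-off in this sketch: the small redirect service itself is not geographically distributed (it only serves tiny 302 responses), but every share-link view passes through it, so per-user views can be counted even when the image bytes themselves are served from a CDN edge cache.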
I am new to server stuff.
I'd like to store some images on the server (willing to use Amazon Lightsail) and load those images from a mobile app and display them.
Can I do this with Amazon Lightsail storage/buckets, and do I need to buy it?
I think I will store just a few images (probably less than 200 images, each image less than 1MB).
Not gonna upload them all at once.
I guess this is a really simple question, but for beginners it's not so simple.
Thanks in advance for any comments.
Amazon Lightsail is AWS' easy-to-use virtual private server. Lightsail offers virtual servers, storage, databases, networking, CDN, and monitoring for a low, predictable price.
The idea of the Lightsail service is to provide an easy way for you to host your application without having to handle a lot of settings.
For your case you simply need an S3 bucket to host the images; I do not think Lightsail would be the best option.
Simply save your files in an S3 bucket and load the images using presigned/public URLs, or via the SDK.
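If you go the presigned-URL route, a minimal sketch with the AWS SDK for JavaScript v3 could look like the following (bucket and key names are just placeholders); your mobile app then loads the returned URL like any other image URL:

    // Sketch: generate a time-limited presigned GET URL for one image in S3.
    // Bucket and key are hypothetical placeholders.
    import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
    import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

    const s3 = new S3Client({ region: "us-east-1" });

    async function imageUrl(key: string): Promise<string> {
      const command = new GetObjectCommand({ Bucket: "my-app-images", Key: key });
      // URL stays valid for one hour; after that the app must request a fresh one.
      return getSignedUrl(s3, command, { expiresIn: 3600 });
    }

    imageUrl("photos/cat.jpg").then(console.log).catch(console.error);

If the images are meant to be public anyway, you can instead make the objects publicly readable and have the app load the plain object URLs directly, with no signing step.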
For more information about the Lightsail service take a look here: What can I do with Lightsail?
I think we're using S3, but I'll know for sure in a couple of hours when I get into the office. I'm just trying to understand the difference between AWS and a smaller host like SiteGround: how does it interact with an actual website? I am used to cPanel and FTP and having a whole bunch of utilities to work with, but I don't see much of that on AWS, and I just want to understand the differences at this point. Can you please help?
SiteGround will provide you with a pre-installed environment for your website, whether for dynamic (PHP) or static (HTML) files. AWS S3, on the other hand, will only let you host static files; for dynamic content you can install your own web server on EC2, though you might run into issues setting up a web server on EC2 yourself.
For a comparison between the two, you can check this article: Amazon Web Services (AWS) vs. SiteGround
Thanks :)
Folks,
I've set up an SFTP server on an EC2 instance to receive files from remote customers who need to send 3 files each, several times throughout the day (each customer connects multiple times a day, each time transferring the 3 files, which keep their names but change their contents). This works fine as long as the number of customers connecting simultaneously is kept under control; however, I cannot control exactly when each customer will connect (they have automated the connection process on their end). I am anticipating that I may hit a bottleneck if too many people try to upload files at the same time, and have been looking for alternatives to the whole process ("distributed file transfer" of some sort). That's when I stumbled upon AWS S3, which is distributed by definition, and was wondering if I could do something like the following:
Create a bucket called "incoming-files"
Create several folders inside this bucket, one for each customer
Set up a file transfer mechanism (I believe I'd have to use S3's SDK somehow)
Provide a client application for each customer, so that they can run it at their side to upload the files to their specific folders inside the bucket
This last point is easy on SFTP, since you can set a "root" folder for each user so that when they connect to the server they automatically land in their own folder. I'm not sure if something of this sort can be worked out on S3. Also, the file transfer mechanism would have to provide not only credentials to access the bucket, but also "sub-credentials" to access the folder.
I have been digging into S3 but couldn't quite figure out if this whole idea is (a) feasible and (b) practical. The other limitation with my original SFTP solution is that by definition an SFTP server is a single point of failure, which I'd be glad to avoid. I'd be thrilled if someone could shed some light on this (btw, other solutions are also welcomed).
Note that I am trying to eliminate the SFTP server altogether, and not mount an S3 bucket as the "root folder" for the SFTP server.
Thank you
You can create an IAM policy that grants access only to a certain prefix (a "folder" in your plan). The only permission your customers need is to make PUT requests under that prefix. For each customer you will also need to create a set of access keys.
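As a rough sketch of what such a per-customer policy could look like (the bucket, user, and customer names are placeholders), here is how you might attach it as an inline IAM user policy with the AWS SDK for JavaScript v3; in practice you would repeat this once per customer:

    // Sketch: give IAM user "customer-a" permission to PUT only under incoming-files/customer-a/.
    // All names are hypothetical placeholders.
    import { IAMClient, PutUserPolicyCommand } from "@aws-sdk/client-iam";

    const iam = new IAMClient({ region: "us-east-1" });

    async function grantUploadPrefix(customer: string): Promise<void> {
      const policy = {
        Version: "2012-10-17",
        Statement: [
          {
            Effect: "Allow",
            Action: ["s3:PutObject"],
            Resource: [`arn:aws:s3:::incoming-files/${customer}/*`],
          },
        ],
      };

      await iam.send(new PutUserPolicyCommand({
        UserName: customer,                       // one IAM user per customer
        PolicyName: "upload-to-own-prefix",
        PolicyDocument: JSON.stringify(policy),
      }));
    }

    grantUploadPrefix("customer-a").catch(console.error);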
It seems you're overcomplicating this. If SFTP is a bottleneck and is not redundant, you can always create an Auto Scaling group (with an ELB or DNS round-robin in front of it) and mount S3 on the EC2 instances with s3fs or goofys. If cost is not an issue here, you can even mount EFS as an NFS share.
AWS has an example configuration here that seems like it may meet your needs pretty well.
I think you're definitely right to consider s3 over a traditional SFTP setup. If you do go with a server-based approach, I agree with Sergey's answer -- an auto-scaling group of servers backed by shared EFS storage. You will, of course, have to own maintenance of those servers, which may or may not be an issue depending on your expertise and desire to do so.
A pure s3 solution, however, will almost certainly be cheaper and require less maintenance in the long-run.
There is now an AWS managed SFTP service in the AWS Transfer family.
https://aws.amazon.com/blogs/aws/new-aws-transfer-for-sftp-fully-managed-sftp-service-for-amazon-s3/
Today we are launching AWS Transfer for SFTP, a fully-managed, highly-available SFTP service. You simply create a server, set up user accounts, and associate the server with one or more Amazon Simple Storage Service (S3) buckets. You have fine-grained control over user identity, permissions, and keys. You can create users within Transfer for SFTP, or you can make use of an existing identity provider. You can also use IAM policies to control the level of access granted to each user. You can also make use of your existing DNS name and SSH public keys, making it easy for you to migrate to Transfer for SFTP. Your customers and your partners will continue to connect and to make transfers as usual, with no changes to their existing workflows.
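For completeness, here is a rough sketch of setting this up programmatically with the AWS SDK for JavaScript v3 (the role ARN, account ID, SSH key, and user name are placeholders, and the same steps can be done from the console instead):

    // Sketch: create a service-managed SFTP endpoint and one user whose home
    // directory maps to a customer's prefix in the bucket. All names/ARNs are placeholders.
    import { TransferClient, CreateServerCommand, CreateUserCommand } from "@aws-sdk/client-transfer";

    const transfer = new TransferClient({ region: "us-east-1" });

    async function setup(): Promise<void> {
      const { ServerId } = await transfer.send(new CreateServerCommand({
        IdentityProviderType: "SERVICE_MANAGED",
        Protocols: ["SFTP"],
      }));

      await transfer.send(new CreateUserCommand({
        ServerId,
        UserName: "customer-a",
        Role: "arn:aws:iam::123456789012:role/transfer-s3-access", // role granting access to the bucket
        HomeDirectory: "/incoming-files/customer-a",               // lands the user in their own prefix
        SshPublicKeyBody: "ssh-rsa AAAA... customer-a-key",        // the customer's public key
      }));
    }

    setup().catch(console.error);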
I have a semi-popular Django website with postgresql backend where users share photos with one another (approx 3 are shared per minute).
The whole set up is hosted on two separate Azure VMs - one for the web application and one for the database. I use classic VMs, both are part of the same resource group, and map to the same DNS as well (i.e. they both live on xyz.cloudapp.net). I also use Azure blob storage for my images (but not for other static files like the CSS) - for this I've provisioned a storage account.
Since I'm heavily reliant on images and I want to speed up how fast my static content is displayed to my users, I want to get Azure CDN into the mix. I just provisioned one from the portal, making it part of the same resource group as my classic VMs.
Next, I'm trying to add a CDN endpoint. I need help in setting that up:
1) Can a CDN be used with classic VMs, or is it a feature solely for Resource Manager deployments?
2) Given 'yes' to the previous one, when one provisions a CDN endpoint, what should the origin type be? Should it be the cloud service I'm using (i.e. the one under which my VMs fall), OR should it be the Azure storage account which holds all my images? What about other static content (e.g. the CSS), which isn't hosted on Azure blobs?
3) What's the purpose of the optional origin path? Should I specify directories? What happens if I don't?
4) Will I be up and running right after the CDN endpoint is successfully provisioned? Or is there more configuration to come after this? I'm unsure what to expect, and I don't want to disrupt my live service.
Btw, going through the answer here still doesn't comprehensively answer my questions. Reason being:
1) I don't use an Azure Web App; I've provisioned virtual machines and done my own setup on Ubuntu
2) I'm unsure whether I'm supposed to create a new storage account for the CDN, as discussed in this question's answer.
3) Again, not being a web app, should I map the origin type to my blob service URL? The answer seems to say so, however, I do have the option of using my cloudservice DNS instead. What happens in each case?
Sounds like you have two origins, a storage account and a VM.
What you need to do here is to create two CDN endpoints, one for your pictures in the storage account, one for the css on the VM.
Let's say I created myendpoint1.azureedge.net, using the VM as an origin and I also created myendpoint2.azureedge.net, using the storage account as an origin.
If I access myendpoint1.azureedge.net/Content/css/bootstrap.css, I should be able to get the same content as xyz.cloudapp.net/Content/css/bootstrap.css
If I access myendpoint2.azureedge.net/myPictureContainer/pic.jpg, I should be able to get the same content as mystorageaccount.blob.core.windows.net/myPictureContainer/pic.jpg
After all the validation is done, you need to change your HTML files to reference the CSS from myendpoint1.azureedge.net and the pictures from myendpoint2.azureedge.net, and then redeploy your website. There will be no interruption of the service.
Also, the CDN can be used with any kind of origin, so yes, it works for classic VMs. The type of the origin doesn't matter; if the URL of your VM/storage is not in any of the dropdown lists, just use the custom origin option and enter the correct URL.