Assets in AWS S3 bucket blocked in Iran - amazon-web-services

I have a bucket where I store some images and short clips to use inside my app, but I've noticed that users in Iran cannot see the images or watch the videos without using a proxy.
Is there any solution so that those users can see the images and watch the videos?
My bucket is public and located in the Asia Pacific (Singapore) ap-southeast-1 region.

Is there any solution so that those users can see the images and watch the videos?
No, unfortunately.
In compliance with (extreme) United States government sanctions and export-control regulations, Amazon Web Services blocks access for customers located in Iran.
Amazon Web Services is theoretically exempted from US sanctions targeting Iran according to the Iran General License (No. D-1) issued by the US Treasury.
However, in practice Amazon is over-complying with the sanctions, unfortunately crippling access for Iranian customers.
Given the sheer number of sanctions that apply to doing business with Iran, US-based businesses simply do not take risks: they would rather block access completely than allow it on a case-by-case basis.
AWS falls into this category, as do virtually all other US companies, e.g. PayPal (ZarinPal), Uber (Snapp!) and eBay (Digikala), to name a few.
If you're providing an international service, Cloudflare as a CDN will work. Popular Iranian websites like hamyarwp.com use Cloudflare and are still accessible within Iran.
If you're providing a local service, hosting with a local Iranian provider is the best way forward to guarantee access.
For object storage, perhaps try out Google Cloud Storage.
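For illustration, here is a minimal sketch (not part of the original answer) of uploading an asset with the google-cloud-storage Python client; the bucket name and object path are placeholders, and it assumes credentials are already configured and the bucket is publicly readable:

```python
# Hypothetical example: push an app asset to Google Cloud Storage.
# Assumes: `pip install google-cloud-storage`, credentials configured,
# and a bucket that already allows public reads.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-app-assets")      # placeholder bucket name
blob = bucket.blob("clips/intro.mp4")        # placeholder object path
blob.upload_from_filename("intro.mp4")       # local file to upload
print(blob.public_url)                       # URL the app would embed
```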

Related

AWS Services and Regions

I am very new to AWS and wanted to clear up my concept of AWS services. I have read that AWS has plenty of services that can also be accessed through an API. A service is basically a software program, so why are services not available in all regions? If my customers are from India, I can buy an EC2 instance in Asia, but why should I have to choose a service from US East? Also, why does AWS provide regions for endpoints? They could have installed all the services in all their regions, assuming that they are only software programs and not hardware resources.
If latency is not a big problem for you, you can simply choose the cheapest region for your resources. If latency is a big problem, you should choose the region/zone nearest your target market. For a better understanding, read this doc.
AWS Services operate on multiple levels and are all exposed through APIs.
Some services operate at a global scope (e.g. Identity and Access Management or Route53), most on a regional level (e.g. S3) and others somewhere between the region and availability zone (EC2, RDS, VPC...).
AWS uses the concept of a region for multiple purposes, one of the major drivers being fault isolation. Something breaking in Ireland (eu-west-1) shouldn't stop a service in Frankfurt (eu-central-1) from operating. Latency is another driver here. Since physics is involved, higher distances also increase the latency, which makes things like replication more tricky. Data residency and other compliance aspects are also a good reason to compartmentalize services.
Services being regional results in their endpoints being regional as well.
As for not every service being available in every region: hardware availability is part of the reason; it doesn't make sense to stock the more obscure hardware for niche use cases (think Ground Station, their satellite-control service) in all regions. Aside from that, there are most likely financial aspects involved as well: global scale and complexity come at a cost, and if demand isn't sufficient, it may not make sense to roll out a service everywhere.
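As a small illustration (my own addition, assuming boto3 is installed), the SDK itself can tell you which regions expose an endpoint for a given service, which makes the point about uneven availability easy to see:

```python
# List the regions that currently offer an endpoint for a given service.
import boto3

session = boto3.session.Session()
print(sorted(session.get_available_regions("s3")))             # a long list
print(sorted(session.get_available_regions("groundstation")))  # a much shorter list
```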

Universal bucket in Google Cloud Platform for content, which would determine the user's location and serve content from the closest server?

I am a mobile application developer. I use Google Cloud bucket to store 10-second videos and photos that I use in the application. Thousands of users use the application every day, and I want to use a CDN to ensure that the content of the application is delivered to users with minimum delays and maximum speed.
At the moment, I have only found the option to create a bucket in a single location, choosing between the US, Europe, and Asia. How do I create a universal bucket in Google Cloud Platform for storing application content, one that would determine the user's location and serve content from the server closest to the user?
Thank you!
You can take full advantage of Cloud CDN when you use an HTTP(S) Load Balancer, since it's a global resource, meaning your bucket traffic would be served from a GCP Point of Presence (PoP) near clients all over the world.
If you're interested, here's how to do it, but keep in mind that you would (potentially) need to change your DNS record from the current bucket to the new Load Balancer, and HTTP --> HTTPS redirection is not implemented by default (you can achieve an automatic redirect with this, but it's something you would need to set up separately).
On an additional note, depending on how often the same files are requested from your bucket, you may be charged less, since repeated requests are served (and billed) as Cloud CDN cache egress rather than regular GCP network egress.
So, in short, your bucket data would stay in your selected region, but the CDN caches would be all around the globe, meaning lower latency, a lower price, and your backend not being overwhelmed (that last point doesn't apply here since you are using buckets, but it would for backend GCE instances).
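As a rough sketch of the first step described above - wrapping the bucket in a CDN-enabled backend bucket - assuming the google-cloud-compute Python client, a placeholder project ID and a placeholder bucket name. The URL map, target proxy and forwarding rule from the linked guide still need to be created on top of this:

```python
# Hypothetical first step: create a CDN-enabled backend bucket in front of
# an existing Cloud Storage bucket. Names and project ID are placeholders.
from google.cloud import compute_v1

backend = compute_v1.BackendBucket(
    name="app-assets-backend",    # placeholder backend bucket name
    bucket_name="my-app-assets",  # the existing Cloud Storage bucket
    enable_cdn=True,              # this is what turns on Cloud CDN caching
)
client = compute_v1.BackendBucketsClient()
op = client.insert(project="my-project", backend_bucket_resource=backend)
op.result()  # wait for the create operation to finish (recent client versions)
```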

AWS serverless - cost of switching region

I am currently in the middle of development of an AWS serverless backend (Cognito, Lambda, API Gateway, DynamoDB, S3).
I've found that I chose the wrong region earlier.
Questions:
1. Is there any difference when using different regions in AWS development?
2. Is the cost high when changing region in the middle of development (re-creating the DB / Lambda functions / API Gateway)?
3. What is the proper approach to switching to another region with the same serverless settings/config I am using now?
1. Cost and latency will differ.
Some services in AWS have different costs in different regions. A few services are global (not tied to any region) - IAM, Route 53 and CloudFront, for example - but most are priced and deployed per region. There are some useful charts on this blog post comparing data-transfer-out cost differences by region.
If your customer is in region A and is requesting services in region B, then the response will take ever so slightly longer. It's not usually long enough to warrant concern. Still, putting CloudFront between the service and the customer will reduce the slowdown - and in many cases makes for a faster service, so it's worth doing even if the customer and the service are in the same region.
2. It depends
If you’re creating these services manually, then you’d have to spend that time in the console all over again for the new region. Time is money, and you’ll maybe make a mistake in the setup - you’re only human.
If you’re creating these services in code - using CloudFormation (or the AWS CDK, the Serverless Framework, Terraform, or one of the many other ways to do Infrastructure as Code) - then it won’t cost anything. You would have a single command (maybe a few) which reproduces your infrastructure in any region.
Then you’ll need to migrate data. This is the unavoidable cost. If you’ve been running in region A for any time and then move to region B, you will need to transfer the data. That’ll require a script to take the data out of DynamoDB and put it into the new table.
3. Use Infrastructure as Code and always be prepared for data migration
Have a look at the AWS CDK. It lets you define your services in familiar languages such as TypeScript, JavaScript, Python or Java, and has some nice tutorials: https://cdkworkshop.com/
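As a hedged example of what that looks like (assuming aws-cdk-lib v2 for Python and a bootstrapped account; the stack and bucket names are made up), the target region becomes just a parameter of the deployment:

```python
# Hypothetical CDK v2 app: the region is only a deployment parameter.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3

class BackendStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # One example resource; Lambda, API Gateway and DynamoDB constructs
        # would be added to the same stack in the same way.
        s3.Bucket(self, "AssetsBucket")

app = cdk.App()
# Redeploying to another region is just a matter of changing this value.
BackendStack(app, "BackendStack", env=cdk.Environment(region="ap-southeast-1"))
app.synth()
```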
As you code, build out your scripts to extract the data from DynamoDB. They’re useful to have even if you don’t transfer to a different region - maybe you want to run a copy in a staging/dev environment.
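A minimal sketch of such a script (not the author's; the table name and regions are placeholders, and it assumes the destination table already exists with the same key schema) could look like this with boto3:

```python
# Copy every item of a DynamoDB table from one region to another.
import boto3

source = boto3.resource("dynamodb", region_name="us-east-1").Table("MyTable")
target = boto3.resource("dynamodb", region_name="eu-central-1").Table("MyTable")

scan_kwargs = {}
while True:
    page = source.scan(**scan_kwargs)            # read one page of items
    with target.batch_writer() as batch:
        for item in page["Items"]:
            batch.put_item(Item=item)            # write them to the new region
    if "LastEvaluatedKey" not in page:
        break                                    # no more pages
    scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```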
4. New services are not released in all regions at the same time
If you are using a brand-new service or a new feature of an existing service, it might not yet be available in every region. Choose a region that supports all the services and features you need. For example, this Dec 2019 announcement by AWS about inter-Region peering for Transit Gateway says the feature was released to the "US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and EU (Frankfurt) AWS Regions", with the others to follow.

How do I host a web application on AWS for worldwide use while following the Data Protection Laws for Germany & European Union?

We are building an application that stores email addresses and phone numbers of its users. We understand that German data privacy laws require the database and web services to be hosted in Germany (i.e. in a data centre in Germany). Our AWS EC2 instance is hosted in us-west. Do we need to host the application in a German data centre as well? We are using PHP 5 with MySQL.
The German privacy law requires you to store and process personal data in data centers located in the EU. So at the moment you are able to use eu-west-1 (Ireland) and eu-central-1 (Frankfurt) on AWS.
But this addresses only one of the technical requirements you need to fulfill to be compliant with German privacy law. There are other technical and non-technical requirements as well (e.g. a data-processing agreement called an Auftragsdatenverarbeitungsvereinbarung, not using global AWS services, ...).
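As a small, hedged illustration (the bucket name is a placeholder), pinning data to the Frankfurt region with boto3 looks like this; objects then stay in eu-central-1 unless you move them yourself:

```python
# Hypothetical example: create an S3 bucket constrained to eu-central-1,
# so stored objects remain in the Frankfurt region.
import boto3

s3 = boto3.client("s3", region_name="eu-central-1")
s3.create_bucket(
    Bucket="my-compliant-bucket",  # placeholder bucket name
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)
```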
The short answer is yes. Regional data must be stored within a data center located in that region. For instance, I have an application that is mainly hosted in US East, but I have customers in Ireland, Sydney, and Japan so I have deployments in those regions as well.
Your best bet is to either intimately familiarize yourself with the data laws in countries you are targeting, or hire a lawyer that can help you through it. You do not want to be on the receiving end of a lawsuit for mishandling customer data!

What is the expectation of privacy on an EC2 instance?

If I turn on a machine in EC2, what expectation of privacy do I have for my running processes, command line history, data stored on ephemeral disk, etc?
Can people at Amazon decide to take a look at what I'm running?
Could Amazon decide to do some profiling for the purposes of upselling?
"Hi there! Looks like you're running Cassandra! Here are the optimal tuning settings for Cassandra on your m1.xlarge machine!"
I can't seem to find anything in the docs...
This is the most applicable thing I found:
AWS only uses each customer's content to provide the AWS services selected by that customer and does not use customer content for any other purposes. AWS treats all customer content the same and has no insight into what type of content the customer chooses to store in AWS. AWS simply makes available the compute, storage, database, mobile, and network services selected by the customer. AWS does not require access to customer content to provide its services.
http://aws.amazon.com/compliance/data-privacy-faq/
What you are asking about should be addressed by the "Data Privacy" section (http://aws.amazon.com/agreement/) of their Customer Agreement page:
3.2 Data Privacy. We participate in the safe harbor programs described in the Privacy Policy. You may specify the AWS regions in which Your Content will be stored and accessible by End Users. We will not move Your Content from your selected AWS regions without notifying you, unless required to comply with the law or requests of governmental entities. You consent to our collection, use and disclosure of information associated with the Service Offerings in accordance with our Privacy Policy, and to the processing of Your Content in, and the transfer of Your Content into, the AWS regions you select.
Here's a link to their "Privacy Policy":
http://aws.amazon.com/privacy/
So, in essence, it's saying that you need to consent to them gathering information associated with your use of the services and stored on your server. That's different from poking at the TCP ports on your machines from the outside. Amazon constantly runs port and traffic checks from the outside (possibly from within their own network too) to make sure you are complying with their customer agreement. For example, they can monitor that you are not hosting something illegal (through public content), or that you are not sending spam or bot traffic trying to hack into other servers.
Having said that, it's quite possible that they use some of these monitoring tools to check: "OK, this person has port so-and-so open, so they must be running this application, and we can suggest something better for them."
Hope it helps.