Do I need to use EC2 with DynamoDB? - amazon-web-services

I haven't been able to find the answer to this question in the Amazon DynamoDB documentation, so my apologies for asking such a basic question here:
Can I access DynamoDB from my own web server, or do I need to use an EC2 instance?
Other than the obvious higher latency, are there any security or performance considerations when using my own server?

You can use Amazon DynamoDB without restrictions from just about anywhere - a nice and helpful demonstration are the AWS Toolkits for Eclipse and Visual Studio, for example, which allow you to create tables, insert and edit data, initiate table scans, and more, straight from your local development environment (see the introductory post AWS Toolkits for Eclipse and Visual Studio Now Support DynamoDB).
Other than the obvious higher latency, are there any security or performance considerations when using my own server?
Not really, other than enabling SSL via the HTTPS endpoint, if your use case requires transport security.
In case you are not using it already, you should also check out AWS Identity and Access Management (IAM), which is highly recommended for securely controlling access to AWS services and resources for your users (i.e. your web server here), rather than simply using your main AWS account credentials.
Depending on your server's location, you might eventually want to select a lower-latency endpoint - the currently available ones are listed in Regions and Endpoints, section Amazon DynamoDB.
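For illustration, here is a minimal sketch of that setup using boto3, the AWS SDK for Python - the table name, key schema, and region are placeholder assumptions, and credentials are expected to come from the standard credential chain (environment variables or ~/.aws/credentials) on your own server, with no EC2 involved:

```python
import boto3

# Choose the region whose endpoint is closest to your server.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

# Assumes an existing table named "my-table" with partition key "id".
table = dynamodb.Table("my-table")

# All calls go over the HTTPS endpoint, exactly as they would from EC2.
table.put_item(Item={"id": "42", "payload": "hello from my own web server"})
response = table.get_item(Key={"id": "42"})
print(response.get("Item"))
```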

Related

Should I use API gateway as a proxy to S3?

I'm learning serverless architectures and currently reading this article on Martin Fowler's blog.
So I see this scheme and try to replace the abstract components with AWS solutions. I wonder whether not using API Gateway to control access to S3 is a good idea (in the image, database no. 2 is not using it). Martin talks about Google Firebase, and I'm not familiar with how it compares to S3.
https://martinfowler.com/articles/serverless/sps.svg
Is it a common strategy to expose S3 to client-side applications without configuring an API gateway as a proxy between them?
To answer your question - probably, yes.
But you’ve made a mistake in mapping AWS services onto the abstract components in Martin’s blog, and you probably shouldn’t use S3 at all in the way you’re describing.
Instead of S3, you’ll want DynamoDB. You’ll also want to look at Cognito for auth.
Have a read of this after Martin’s article for how to apply what you’ve learned on AWS specific services https://aws.amazon.com/getting-started/hands-on/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/
AWS S3 is not a database; it's an object storage service.
Making an S3 bucket publicly accessible is possible but not recommended; however, you can access its objects using the S3 API, either via the CLI or the SDK.
Back to your question in the comments about whether consuming the API directly from the frontend (assuming you mean using JavaScript) is bad practice: it certainly is, because AWS strongly recommends storing your API credentials (keys) securely, and since any AWS API call must include the credentials (keys) AWS issued for your IAM user, anyone using your web application could see those keys.
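One common way around this, sketched below with boto3 (the bucket and object names are made up), is to keep the credentials on your server and hand the browser a short-lived presigned URL instead, so the frontend never sees the keys:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Generates a URL the frontend can GET for 15 minutes; the IAM
# credentials used to sign it never leave the server.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-app-assets", "Key": "reports/summary.pdf"},
    ExpiresIn=900,
)
print(url)
```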
Hope this answered your question.

AWS: Python SDK - do I need to configure an access key and secret access key?

I am trying to write an application in Python.
Through this application I want to create AWS Cognito users and provide services like user Sign-in, Forgot password, etc.
As I understand it, boto3 is the standard Python library for accessing AWS APIs from Python.
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html
This library requires AWS credentials (access key and secret access key) to be stored on the host machine.
Can this be avoided?
I want to distribute this Python application to my users.
I am checking, if I can avoid this configuration of AWS credentials on every user's host.
Is there any alternative option to boto3 library?
If you absolutely need to access internal AWS APIs, you need to log in to AWS. Access keys are one way; it's also possible to use the aws-adfs command line tool to log in through Active Directory, but that requires your AWS/AD administrators to do some additional setup on their side.
I would suggest looking into writing a client-server / web application that would be hosted within AWS and would only expose the relevant functionality to authenticated users.
If costs are an issue for a hosted application, look into Lambdas, as there you pay only for CPU/memory time. A settings-management app like this will probably not even exceed the free tier.
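Worth adding for the Cognito use case in the question: several Cognito user pool operations (sign-up, sign-in, forgot password) do not require AWS credentials at all, only the user pool app client ID, so nothing needs to be stored on your users' machines. A minimal sketch with boto3, assuming an existing user pool whose app client has no client secret and has the USER_PASSWORD_AUTH flow enabled (the IDs and credentials below are placeholders):

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned client: no AWS access key / secret key needed anywhere.
idp = boto3.client(
    "cognito-idp",
    region_name="us-east-1",
    config=Config(signature_version=UNSIGNED),
)

APP_CLIENT_ID = "example-app-client-id"  # placeholder

# Register a user...
idp.sign_up(ClientId=APP_CLIENT_ID, Username="alice", Password="S3cret-passw0rd!")

# ...and sign them in (assumes the user has been confirmed, e.g. via
# confirm_sign_up); the result is a set of JWT tokens, not AWS keys.
resp = idp.initiate_auth(
    ClientId=APP_CLIENT_ID,
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "alice", "PASSWORD": "S3cret-passw0rd!"},
)
print(resp["AuthenticationResult"]["AccessToken"])
```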

Does AWS provide any IDS/IPS in their services, or can you suggest some budget-friendly 3rd-party services?

Looking for a budget-friendly IDS/IPS for my servers. The time frame would be once every three months. Are there any services I can use to run a test once every 3 months and pay only for that particular period of time?
There are services like AWS Shield and AWS WAF that you can use for IDS/IPS.
AWS Shield
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield - Standard and Advanced.
AWS WAF
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define. You can get started quickly using Managed Rules for AWS WAF, a pre-configured set of rules managed by AWS or AWS Marketplace Sellers. The Managed Rules for WAF address issues like the OWASP Top 10 security risks. These rules are regularly updated as new issues emerge. AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of security rules.
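As a taste of that API, here is a sketch using boto3 and the wafv2 interface that creates a regional web ACL applying the AWS-managed Common Rule Set; the ACL name and metric names are placeholders:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="example-web-acl",
    Scope="REGIONAL",  # use "CLOUDFRONT" (in us-east-1) for CloudFront
    DefaultAction={"Allow": {}},  # allow anything the rules don't block
    Rules=[
        {
            "Name": "aws-common-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            # Let the managed rule group's own actions apply unchanged.
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "awsCommonRules",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "exampleWebAcl",
    },
)
```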
You can also buy third-party software that you can run on EC2 instances for IDS/IPS.
Intrusion Detection & Prevention Systems
EC2 instance IDS/IPS solutions offer key features to help protect your EC2 instances. This includes alerting administrators of malicious activity and policy violations, as well as identifying and taking action against attacks. You can use AWS services and third-party IDS/IPS solutions offered in AWS Marketplace to stay one step ahead of potential attackers.

AWS assume iam roles vs gcp's json files with private keys

One thing I dislike about Google Cloud Platform (GCP) is its less baked-in security model around roles/service accounts.
Running locally on my laptop, I need to use the service account's key specified in a JSON file. In AWS, I can just assume a role I have been granted access to assume (without needing to carry around a private key). Is there an analogue to this with GCP?
I am going to try and answer this. I hold the AWS Security Specialty (among 8 AWS certifications) and I know AWS very well. I have been investing a lot of time this year mastering Google Cloud with a focus on authorization and security. I am also an MVP for Security on Alibaba Cloud.
AWS has a focus on security and security features that I both admire and appreciate. However, unless you really spend the time to understand all the little details, it is easy to implement poor/broken security in AWS. I can also say the same about Google security. Google has excellent security built into Google Cloud Platform. Google just does it differently and also requires a lot of time to understand all the little features / details.
In AWS, you cannot just assume a role. You need an AWS access key first, or to be authenticated via a service role; then you can call STS to assume a role. Both AWS and Google make this easy with AWS access keys / Google service accounts. Whereas AWS uses roles, Google uses roles/scopes. The end result is good on either platform.
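To make the AWS side concrete, a minimal boto3 sketch of that two-step flow (the role ARN is a placeholder):

```python
import boto3

# Step 1: this client is authenticated with your base credentials
# (access key / secret key from the environment or ~/.aws/credentials).
sts = boto3.client("sts")

# Step 2: exchange them for short-lived credentials for the role.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/example-role",  # placeholder
    RoleSessionName="local-dev",
)
creds = resp["Credentials"]

# Step 3: act as the role, with no long-lived key for it on disk.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(session.client("sts").get_caller_identity()["Arn"])
```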
Google authentication is based upon OAuth 2.0. AWS authentication is based upon Access Key / Secret Key. Both have their strengths and weaknesses. Both can be either easy to implement (if you understand them well) or a pain to get correct.
The major cloud providers (AWS, Azure, Alibaba, Google, IBM) are moving very fast, with a constant stream of new features and services. Each one has strengths and weaknesses, and today no platform offers all the features of the others. AWS is currently ahead in both features and market share. Google has a vast number of services that outnumber AWS's, and I don't know why this is overlooked. The other platforms are catching up quickly, and today you can implement enterprise-class solutions and security on any of the cloud platforms.
Today, we would not choose only Microsoft or only open source for our application and server infrastructure. In 2019, we will not be choosing only AWS or only Google, etc. for our cloud infrastructure. We will mix and match the best services from each platform for our needs.
As described in the Getting Started with Authentication [1] page, for service accounts a key file is needed in order to authenticate.
From [2]: You can authenticate to a Google Cloud Platform (GCP) API using service accounts or user accounts, and for APIs that don't require authentication, you can use API keys.
Service and user accounts need the key file to authenticate. Taking this information into account, there is no way to authenticate locally without using a key file (see the sketch after the links below).
Links:
[1] https://cloud.google.com/docs/authentication/getting-started
[2] https://cloud.google.com/docs/authentication/
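For completeness, here is what that key-file flow looks like with the google-auth Python library; the file name, scope, and project are placeholders:

```python
from google.oauth2 import service_account
from google.auth.transport.requests import AuthorizedSession

# Load the service account's private key from the downloaded JSON file.
credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

# The session obtains and attaches OAuth 2.0 tokens automatically.
session = AuthorizedSession(credentials)
resp = session.get("https://www.googleapis.com/storage/v1/b?project=my-project")
print(resp.json())
```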

Amazon Web Services: RDS + Web Service vs DynamoDB API for Mobile App. What's Easiest/Best for Security?

I'm building a mobile app that needs a backend that I've chosen to host using Amazon Web Services.
Their mobile SDKs provide APIs to work directly with DynamoDB (making my app a thick client), including user authentication/authorization with their IAM service (which is what I'm going to use to track users). This makes it easy to say "user X wants their information. Here's their temporary access key. Oh, here's the information you requested."
However, if I used RDS as the backend database, I'd have to create web services (in PHP or Java, etc.) that my app can talk to. Then I'd also have to implement the authentication/authorization myself within my web service (which I feel could get very messy). I'd also have to host the web service on an EC2 instance, as well as run the RDS instance, so my costs would increase.
The latter seems like it would be a lot of work, something which I could avoid by using DynamoDB (and its API) as my backend.
Am I correct in my reasoning here? Or is there an easy way to authenticate/authorize a PHP web service with an AWS RDS database?
I ask because I've only ever worked with relational databases before, so there would be a learning curve to get a NoSQL DB running. Though hypothetically my plan is to eventually switch to a NoSQL DB at some point anyway, due to my app's increasing demands.
Side note: I already have my database designed in MySQL.
There is no way to use IAM directly with RDS, because fine-grained access control over RDS tables is not available. Moreover, IAM policies cannot be enforced dynamically there (i.e. with an Identity Pool).
RDS is not exposed as a SaaS-style service endpoint; you connect to it with an ordinary database driver. DynamoDB, by contrast, is a REST service presented as a distributed key-value store and exposes HTTPS endpoints to clients (the AWS SDK is just a wrapper around them).
DynamoDB was born as a distributed service and can guarantee fine-grained control over data access, thus allowing direct, concurrent access from clients.
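To illustrate the kind of fine-grained control meant here, a hypothetical sketch: an IAM policy, built as a Python dict and created with boto3, that restricts each Cognito identity to the DynamoDB items whose partition key equals its own identity ID (the account ID, table name, and policy name are placeholders):

```python
import json
import boto3

# Each authenticated Cognito identity may only touch items whose
# partition key matches its own identity ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
                }
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="per-user-dynamodb-access",  # placeholder
    PolicyDocument=json.dumps(policy),
)
```

Attach a policy like this to the Identity Pool's authenticated role, and the temporary credentials the mobile SDK hands out are automatically scoped per user.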