Prevent SQL injection when querying access logs using Athena

Amazon Athena allows you to query CloudFront access logs stored in S3. These access logs include URIs that originate from web clients.
If a bad actor included malicious data in such a URI, how could one make sure that Athena is not compromised by a SQL-injected URI string? Do Athena or CloudFront provide any default protections here?

No. Only AWS WAF provides protection against SQL injection.
Please note that it is not the job of the query engine to prevent SQL injection -- it is the job of whatever generates the SQL before sending it to the database. Malicious URI strings sitting in the log files are just data; they only become dangerous if your own code splices them verbatim into new SQL text.
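If you do build queries from log-derived values, quote and escape them before interpolation (Athena also supports parameterized queries, via EXECUTE with prepared statements and the ExecutionParameters option of StartQueryExecution). Here is a minimal sketch of a quoting helper for Athena's Presto-style string literals; the helper name and table name are my own illustration:

```python
def sql_quote(value: str) -> str:
    """Return value as a single-quoted Athena (Presto) string literal,
    doubling embedded single quotes so they cannot terminate the literal."""
    return "'" + value.replace("'", "''") + "'"

# An attacker-controlled URI pulled from a CloudFront access log.
uri = "/index.html' OR '1'='1"

query = f"SELECT * FROM cloudfront_logs WHERE uri = {sql_quote(uri)}"
# -> SELECT * FROM cloudfront_logs WHERE uri = '/index.html'' OR ''1''=''1'
```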

Related

Can I use temporary AWS IAM credentials with the BigQuery Data Transfer Service?

Currently, we use permanent AWS IAM user credentials to transfer customers' data from our company's internal AWS S3 buckets to customers' Google BigQuery tables, following the BigQuery Data Transfer Service documentation.
Using permanent credentials poses security risks for the data stored in AWS S3.
We would like to use temporary AWS IAM role credentials, which require support for a session token on the BigQuery side to get authorized on the AWS side.
Is there a way that the BigQuery Data Transfer Service can use AWS IAM roles or temporary credentials to authorise against AWS and transfer data?
We considered the BigQuery Omni framework (https://cloud.google.com/bigquery/docs/omni-aws-cross-cloud-transfer) to transfer data from S3 to BQ; however, we faced several concerns/limitations:
Omni targets data-analysis use cases rather than data transfer from external services, which makes us concerned that its design may have drawbacks for data transfer at high scale.
Omni currently supports only the aws-us-east-1 region (we require support at least in aws-us-west-2 and aws-eu-central-1, plus the corresponding Google regions). This is not backward compatible with our current customers' setup for transferring data from internal S3 to customers' BQ.
Our current customers would need to sign up for the Omni service to properly migrate from the transfer solution we use today.
We considered a workaround of exporting data from S3 through staging in GCS (i.e. S3 -> GCS -> BQ), but this would also require a lot of migration effort on both the customers' and our company's sides.
Is there a way that the BigQuery Data Transfer Service can use AWS IAM roles or temporary credentials to authorise against AWS and transfer data?
No, unfortunately.
The official Google BigQuery Data Transfer Service documentation mentions only AWS access keys throughout:
The access key ID and secret access key are used to access the Amazon S3 data on your behalf. As a best practice, create a unique access key ID and secret access key specifically for Amazon S3 transfers to give minimal access to the BigQuery Data Transfer Service. For information on managing your access keys, see the AWS general reference documentation.
The irony of the Google documentation is that, while it refers to best practices and links to the official AWS docs, it doesn't actually follow those best practices and ignores what AWS itself says:
We recommend that you use temporary access keys over long term access keys, as mentioned in the previous section.
Important
Unless there is no other option, we strongly recommend that you don't create long-term access keys for your (root) user. If a malicious user gains access to your (root) user access keys, they can completely take over your account.
You have a few options:
hook into both sides manually (i.e. link up the various SDKs and/or APIs yourself; see the sketch after this list)
find an alternative BigQuery-compatible service that supports temporary credentials
accept the risk of long-term access keys.
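For the first option, here is a minimal sketch of what hooking into both sides could look like: STS-issued temporary credentials with boto3 on the AWS side and the google-cloud-bigquery client on the Google side (the role ARN, bucket, and table names are hypothetical):

```python
import io

import boto3
from google.cloud import bigquery

# Exchange a role for short-lived credentials instead of long-term access keys.
creds = boto3.client("sts").assume_role(
    RoleArn="arn:aws:iam::123456789012:role/s3-reader",  # hypothetical role
    RoleSessionName="bq-transfer",
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Read the export and load it into BigQuery ourselves (buffered in memory,
# so this sketch only suits modestly sized files).
data = io.BytesIO(s3.get_object(Bucket="my-bucket", Key="export/data.csv")["Body"].read())

bq = bigquery.Client()
job = bq.load_table_from_file(
    data,
    "my_project.my_dataset.my_table",  # hypothetical destination table
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    ),
)
job.result()  # block until the load job finishes
```

This trades the managed transfer service for your own scheduling and error handling, but it removes the long-term keys entirely.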
In conclusion, Google is at fault here for not following security best practices, and you, as a consumer, have to bear the risk.

Can Snowflake be used as a source endpoint in AWS Database Migration Service?

I am trying to use AWS Database Migration Service (DMS) with Snowflake as a source database. Is there any way I can achieve this?
All I could see were options for IBM DB2, MySQL, SQL Server, Amazon Aurora, Oracle, SAP Sybase, etc., but not Snowflake.
Can an ODBC string for Snowflake be put in as a source endpoint? Or is there any workaround?
DMS doesn't support Snowflake as an endpoint (source or destination) yet. If your actual goal is to get data into Snowflake, you could use S3 as the DMS target and then use:
Snowflake bulk load to load the data from S3 (https://docs.snowflake.com/en/user-guide/data-load-s3-create-stage.html), or
Snowpipe to do continuous loading.
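For example, once DMS is writing files to S3, the Snowflake side of a one-off bulk load could look like the sketch below, using the snowflake-connector-python package (all account, bucket, credential, and table names are hypothetical):

```python
import snowflake.connector

# Hypothetical connection details.
conn = snowflake.connector.connect(
    account="my_account",
    user="loader",
    password="...",
    warehouse="LOAD_WH",
    database="RAW",
    schema="PUBLIC",
)
cur = conn.cursor()

# One-time setup: point an external stage at the bucket DMS writes to.
cur.execute("""
    CREATE STAGE IF NOT EXISTS dms_stage
      URL = 's3://my-dms-target-bucket/migration/'
      CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')
""")

# Bulk-load whatever DMS has dropped into the bucket so far.
cur.execute("""
    COPY INTO my_table
    FROM @dms_stage
    FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"')
""")
```

For ongoing replication, a Snowpipe with AUTO_INGEST on the same stage covers the continuous-loading case.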

Creating Maintenance Pages for DynamoDB tables using AWS

What are your suggestions for how I should create maintenance/administration pages that allow me to add/modify/report on entries in my DynamoDB tables on AWS?
What I would like to do is create web pages that are hosted in AWS S3 but allow me to script DynamoDB access.
I'm trying to avoid setting up something like a LAMP stack on another host.
Without any backend code, you're limited to JavaScript (or a derivative); try the AWS SDK for JavaScript in the browser. Alternatively, you can use API Gateway, Lambda, and JavaScript. Either works with S3-only hosting. Lambda can hide implementation details behind an API, and you don't need to worry about managing servers; more moving parts is the trade-off.
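For the API Gateway + Lambda route, the function behind the page can be quite small. A sketch in Python, with a hypothetical table name:

```python
import json

import boto3

# Hypothetical table; the function's IAM role needs dynamodb:GetItem/PutItem on it.
table = boto3.resource("dynamodb").Table("MaintenanceItems")

def handler(event, context):
    """API Gateway proxy handler: GET reads an item by id, POST upserts one."""
    method = event.get("httpMethod")
    if method == "GET":
        key = {"id": event["queryStringParameters"]["id"]}
        item = table.get_item(Key=key).get("Item", {})
        # default=str papers over DynamoDB's Decimal number type for JSON output.
        return {"statusCode": 200, "body": json.dumps(item, default=str)}
    if method == "POST":
        table.put_item(Item=json.loads(event["body"]))
        return {"statusCode": 200, "body": json.dumps({"ok": True})}
    return {"statusCode": 405, "body": "method not allowed"}
```

The S3-hosted page then calls the API with plain fetch() requests, so no server of your own is involved.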

Enabling bulk admin privilege in Amazon RDS?

I wanted to set up an RDS instance to store data for reporting. I have scripts that run various REST calls against certain sites; they require bulk admin privilege on the back end because they dump the REST call data into a CSV and then do a bulk CSV insert into SQL Server SE. In my local environment, setting up a user with bulk admin privileges for my scripts was easy. However, I couldn't figure out how to do it in RDS. I opened a ticket with Amazon and they suggested writing a policy for it. So I figured I would ask here whether this is possible, and what the alternatives are. If bulk/system admin privileges are out of the question in RDS, I guess I will just have to use an AWS EC2 instance with SQL Server set up on it.
Bulk insert is not possible with RDS: the data_file parameter of the BULK INSERT command must refer to a file accessible from the machine running SQL Server, and RDS gives you no access to the instance's file system.
Also, RDS does not support the bulkadmin server role.
Supported SQL Server Roles and Permissions
Importing and Exporting SQL Server Data
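As a bulkadmin-free workaround, the scripts can swap the BULK INSERT step for batched parameterized INSERTs over an ordinary connection, e.g. with pyodbc. A sketch, where the endpoint, file, table, and column names are hypothetical:

```python
import csv

import pyodbc

# Hypothetical RDS endpoint and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myinstance.abc123.us-east-1.rds.amazonaws.com;"
    "DATABASE=reporting;UID=app_user;PWD=..."
)
cur = conn.cursor()
cur.fast_executemany = True  # send the rows in large batches instead of one by one

with open("restcall_dump.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    rows = list(reader)

cur.executemany(
    "INSERT INTO dbo.report_data (site, metric, value) VALUES (?, ?, ?)",
    rows,
)
conn.commit()
```

fast_executemany won't match true BULK INSERT throughput, but it avoids the unsupported bulkadmin role entirely.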

S3 GET without Signing (Cleartext possible?)

Is there a way to GET an object from Amazon S3 by sending the cleartext AccessKey:Secret instead of signing with an HMAC?
Not unless the ACL is set for anonymous read access.
Another option would be to use Amazon S3 query string authentication with your URL. Most Amazon S3 clients can generate that.
You can also use S3fm, an online file manager for Amazon S3: select Web URL in the context menu and generate a URL with an expiration date.
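To generate such a query-string-authenticated (presigned) URL programmatically, a few lines of boto3 are enough (bucket and key names are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Time-limited, query-string-authenticated URL; the caller needs no SDK or keys.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "path/to/object.txt"},  # hypothetical
    ExpiresIn=3600,  # seconds
)
print(url)  # fetch with any plain HTTP client, e.g. curl
```

The resulting URL can be fetched by anyone until ExpiresIn elapses, and the secret key itself never travels over the wire.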