Is it possible to use AWS DMS without Oracle Log Miner or Binary Reader? - amazon-web-services

I'm using AWS DMS to migrate data from an Oracle Source Endpoint into an S3 Target Endpoint. I do not have administrative control over the Source Endpoint, and those who do are reluctant to provide GRANT privileges to read V_$LOG etc.
Is there a way to use DMS to move data from Source to Target without having these kinds of GRANTs?
I tried disabling the CDC options altogether by setting UseBFile to false and UseLogminerReader to false. However, setting one of them to false causes the other to be re-enabled, and the result is:
Test Endpoint failed: Application-Status: 1020912, Application-Message: No permissions to access V$LOG Endpoint initialization failed.
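For reference, here is roughly how I am applying those attributes; a minimal sketch with boto3 (the endpoint ARN is a placeholder, and the attribute names follow the documented DMS Oracle source settings):
import boto3
dms = boto3.client("dms")
# Placeholder ARN; attempts to turn off both LogMiner and Binary Reader on the Oracle source.
dms.modify_endpoint(
    EndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE",
    ExtraConnectionAttributes="useLogminerReader=N;useBfile=N",
)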

Related

Prevent SQL injection when querying access logs using Athena

AWS Athena allows you to query CloudFront access logs that are stored in S3. These access logs include URIs that originate from web clients.
If a bad actor included malicious data in this URI, how could one make sure that Athena is not compromised by a SQL-injected URI string? Does Athena or CloudFront provide any default protections here?
No. Only AWS WAF provides protection against SQL injections.
Please note that it is not the job of the query engine to prevent SQL injections -- it is the job of whatever generates the SQL before sending it to the database.
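For example, if your own tooling builds Athena queries from request data, a sketch along these lines (table, workgroup, and output bucket are placeholders, and it assumes a boto3 version that supports Athena execution parameters) keeps the untrusted value out of the SQL string entirely:
import boto3
athena = boto3.client("athena")
user_supplied_uri = "/index.html'; DROP TABLE cloudfront_logs; --"  # untrusted input
athena.start_query_execution(
    QueryString="SELECT * FROM cloudfront_logs WHERE uri = ?",  # placeholder, not string concatenation
    ExecutionParameters=[user_supplied_uri],
    WorkGroup="primary",
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)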

Unable to read from reader endpoint on AWS neptune

My application, as well as awscurl, fails to reach the reader endpoint of my Neptune cluster. I have spawned a single read replica in addition to the primary. I try to hit the status endpoint with the reader and it fails (whereas the primary works):
awscurl https://endpoint:8182/status --service neptune-db -v
I run the above against both the primary (works) and the reader (doesn't work). Why would this be?
Adding an answer to summarize the discussion in the comments.
As a general rule, connection failures such as this one are caused by one or more networking or security settings. Things to check include:
The calling application has the appropriate role and policies in place to allow access.
The calling application has access to the VPC Neptune is running in.
The request is correctly signed in cases where IAM authentication is enabled (see the sketch after this list).
Security groups have the required ports open.
The Neptune service is not blocked by Service Control Policies (SCPs) if AWS Organizations is being used.
Subnets are accessible as needed.
Transit Gateways are working as expected. As noted in the comments the Route Analyzer can be used to help diagnose issues.
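As a sketch of the signing point above (the reader endpoint and region are placeholders), the same SigV4 signature that awscurl produces can be generated with botocore and sent with requests:
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.session import Session
url = "https://my-cluster.cluster-ro-xxxxxxxx.us-east-1.neptune.amazonaws.com:8182/status"  # reader endpoint placeholder
request = AWSRequest(method="GET", url=url)
SigV4Auth(Session().get_credentials(), "neptune-db", "us-east-1").add_auth(request)
response = requests.get(url, headers=dict(request.headers))
print(response.status_code, response.text)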

Google cloud function is not able to access data from GCS bucket

I have a Cloud Function that triggers a Dataflow job. For this, it needs to fetch a Dataflow template that is kept in a GCS bucket.
Using the default service account (linked to the Cloud Function) with the Editor role, I am able to fetch this file.
But using a custom service account with the roles below, I get a 403 status.
Cloud Build Service Account
Cloud Build Service Agent
Cloud Functions Service Agent
Container Registry Service Agent
Dataflow Developer
Storage Object Admin
The error I am getting is
2020-10-21 11:14:20.820 WARN 1 --- [p2094777811-167] .a.b.s.e.g.u.RetryHttpRequestInitializer : Request failed with code 403, performed 0 retries due to IOExceptions, performed 0 retries due to unsuccessful status codes, HTTP framework says request can be retried, (caller responsible for retrying): https://dataflow.googleapis.com/v1b3/projects/<project id>/locations/australia-southeast1/templates:launch?gcsPath=gs://<path>/templates/i-template.
Did I miss any roles? Please help.
The 403 in the error message means that the service account does not have the required permissions to execute the templates:launch operation.
You mentioned that by using the Editor role, you were able to execute the operation without issues. This is because the Editor role grants broad access across most resources: all viewer permissions, plus permissions for actions that modify state, such as changing existing resources.
You can refer to this documentation for more information about Basic role definitions.
Now, you can narrow down the permission scope to a minimum set of permissions which will allow you to have more control over each resource. For this, I would recommend that you add the Cloud Functions Developer and Dataflow Admin roles.
With Cloud Functions Developer, you will have full access to functions, operations, and locations. The Dataflow Admin role encompasses the permissions needed to create and manage Dataflow jobs, and also includes some Cloud Storage permissions, such as storage.buckets.get and the ability to create, get, and list objects.
Lastly, please make sure that you have the necessary permissions for the trigger sources, i.e. Cloud Storage, and using Storage Admin should be enough.
Please note that you can always double-check your roles along with their permissions by looking at the predefined roles tables for each Google Cloud resource, in case you need to narrow things down further.
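To see whether the roles are sufficient for the call that is failing, the same templates:launch request can be exercised directly under the custom service account's credentials; a sketch with the Google API client (project, bucket, and job name are placeholders):
from googleapiclient.discovery import build
# Assumes GOOGLE_APPLICATION_CREDENTIALS points at the custom service account's key.
dataflow = build("dataflow", "v1b3")
response = dataflow.projects().locations().templates().launch(
    projectId="my-project",
    location="australia-southeast1",
    gcsPath="gs://my-dataflow-bucket/templates/i-template",
    body={"jobName": "i-template-test", "parameters": {}},
).execute()
print(response)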
If you want to give Storage read access to this service account (assuming that you are not using fine-grained permissions), you are missing at least the Storage Object Viewer role for this account.
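A quick way to confirm whether the custom service account can read the template object at all is a small check with the google-cloud-storage client (key file, bucket, and object path are placeholders):
from google.cloud import storage
client = storage.Client.from_service_account_json("custom-sa-key.json")
blob = client.bucket("my-dataflow-bucket").blob("templates/i-template")
try:
    print("readable:", blob.exists())
except Exception as exc:  # a 403 Forbidden here means storage.objects.get is missing
    print("access denied:", exc)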

Spring Cloud + RDS (spring-cloud-starter-aws-jdbc) failing to load credentials on startup despite being present

I'm using spring-cloud-starter-aws-jdbc to connect to an RDS instance. I initially went the traditional spring.datasource route, but I needed to make use of read-replicas and wanted to configure this without introducing any weird code.
The error I'm getting is:
Caused by: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@47acd13b: Failed to connect to service endpoint: , com.amazonaws.auth.profile.ProfileCredentialsProvider@6f8e9d06: profile file cannot be null]
    at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:136)
Initially I tried adding an AmazonRDS bean to my configuration and provided the credentials directly, but that wasn't good enough. I set a breakpoint inside getCredentials() and can see it being called twice: the first time there are 5 credential providers, one of which contains the AWS credentials I'm passing in via environment variables.
The second time, there are only two providers, neither of which contain my credentials, and so the app crashes. Has anyone ever used this library before and been successful? I can't figure out why it's fetching the credentials twice when I've already provided the RDS client and even tried providing the credentials with a bean.

AWS boto: No handler after configuration

I'm deploying my Django application on ec2 on AWS.
I configured ~/.boto and finally succeeded in running 'python manage.py collectstatic'.
If the configuration is wrong, an error is raised (I know, because I previously fixed such an error by setting up the ~/.boto configuration file).
But after configuration, when I query an image file on S3 that is mapped to my ImageField model, it shows the error message below:
No handler was ready to authenticate. 1 handlers were checked.
['HmacAuthV1Handler'] Check your credentials
I think I set up authentication correctly, so why is this message occurring?
Using a role is absolutely the correct way to handle authentication from EC2 to AWS. Putting long-term credentials on the machine is a disgusting alternative. Assuming you're using a standard SDK (and boto absolutely is), the SDK will automatically use the role's temporary credentials to authenticate, so all you have to do is launch the instance with an "instance profile" specifying a role, and you get secure credential delivery for free.
You'll have to replace your server to do so, but being able to recreate servers is fundamental to success in AWS anyway. The sooner you start thinking that way, the better the cloud will work for you.
Once the role is attached to the instance, the policies defining the role's permissions can be modified dynamically, so you don't need to have the permissions fully sorted out before creating the role.
At a high level, you specify a role at instance creation time. The EC2 console can facilitate the process of creating a role, allowing the EC2 service to assume it, and attaching it at instance creation time.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html provides detailed instructions.
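As a rough sketch (the bucket name is a placeholder, and it uses boto3 rather than the older boto from the question), once the instance profile is attached no keys need to appear in code or in ~/.boto:
import boto3
# Credentials are resolved automatically from the instance metadata service.
s3 = boto3.client("s3")
for obj in s3.list_objects_v2(Bucket="my-static-assets-bucket").get("Contents", []):
    print(obj["Key"])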