AWS S3 credentials problem log message but S3Client.putObject() succeeds

On Windows 10 I'm using the AWS S3 Java SDK v2, Maven coordinates software.amazon.awssdk:s3:2.7.5 (from software.amazon.awssdk:bom). I have my credentials stored in ~user/.aws/credentials.
As per the docs, I use S3Client.putObject() to put an object in a bucket. I get two DEBUG level log messages from software.amazon.awssdk.auth.credentials.AwsCredentialsProviderChain:
software.amazon.awssdk.core.exception.SdkClientException: Unable to load credentials from system settings. Access key must be specified either via environment variable (AWS_ACCESS_KEY_ID) or system property (aws.accessKeyId).
However S3Client.putObject() succeeds.
Don't I have my credentials configured properly by putting them in ~user/.aws/credentials? Why is this error being logged? I realize that it is a DEBUG level message, but it clutters up the log, not only once but twice. Plus there was no problem; the credentials were configured correctly.
Is there some way to turn off this log message? Or is there some extra configuration step I'm not doing correctly?
Update: It appears that this debug message is only generated once, when the S3 client is created, so it's not as intrusive as it appears. Still, logging an exception for what is only a warning condition is more clutter than I'd like, even at DEBUG level.
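For reference, the putObject() call itself is just the stock pattern from the docs; a minimal sketch of the setup (bucket and key names are placeholders):

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class PutObjectExample {
    public static void main(String[] args) {
        // Built with the default credentials provider chain, which eventually
        // falls back to the profile file in ~/.aws/credentials.
        S3Client s3 = S3Client.create();

        PutObjectRequest request = PutObjectRequest.builder()
                .bucket("my-bucket") // placeholder bucket name
                .key("my-key")       // placeholder object key
                .build();

        // Succeeds; the DEBUG messages come from the chain probing environment
        // variables and system properties before it reaches the profile file.
        s3.putObject(request, RequestBody.fromString("hello"));
    }
}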

This is a common message when you're using environment variables to provide your service with an IAM account (access key ID and secret access key combination). As you've mentioned, it is logged at DEBUG; I don't usually use that log level in any environment except development.
Anyway, you can configure different log levels for different namespaces. Generally I tend to use "Information" for third-party libraries (except for Entity Framework, which always gets set to Error because of its verbosity).
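In the Java project above, assuming Logback is the SLF4J backend (any backend has an equivalent per-logger setting), a minimal sketch of raising just that one namespace so the DEBUG probing messages are suppressed:

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

public class AwsSdkLogTuning {
    public static void quietCredentialsChain() {
        // Raise the credentials package (which contains AwsCredentialsProviderChain)
        // to INFO so its DEBUG-level "unable to load credentials" messages are dropped.
        Logger chainLogger = (Logger) LoggerFactory.getLogger("software.amazon.awssdk.auth.credentials");
        chainLogger.setLevel(Level.INFO);
    }
}

The same thing can be done declaratively in logback.xml with a <logger name="software.amazon.awssdk.auth.credentials" level="INFO"/> element.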

Related

Warning in GCP Cloud SQL Database

Some time ago a warning showed up in the logs. As far as I can see it does not affect the project's workflow, but it clutters the logs.
The warning is:
[Warning] User 'mysql.session'@'localhost' was assigned access 0x8000 but was allowed to have only 0x0.
What is it about?
Thanks!
I'm going to list the most common root causes for the message you are getting:
Permissions: There are user permissions for the database, the Cloud Functions, and the IAM roles. You can make it work by giving "allUsers" complete access. This is not secure, of course, but it works.
Prod vs Dev (emulator): Recall that access is less strict within the Google ecosystem, so running your functions on the local emulator is a poor basis for troubleshooting. There are other ways to connect (such as SSL or a VM). As a last resort, you may have to deploy every time you test.
ORM: If you are using TypeORM for the first time, you will see that many of the tutorials are outdated.

Production Access controls for GoogleCloud using Stackdriver

How have people implemented Production Access Controls (i.e. logging and reporting on access to compute instances by services and humans over SSH)? Our goal is to forward all user logon entries to our SIEM consistently across projects and ideally avoid having project-specific Stackdriver sinks (and the associated setup and maintenance).
We've tried the following:
- Enabled auth log forwarding in Fluentd, since only syslog is forwarded by default
- Enabled organization-level sinks, including all children, that send to a topic (forwarded on to the SIEM via an HTTP subscriber)
- Can see syslog/auth at the project level for non-Container OS images (e.g. Ubuntu)
Issues we're seeing:
- Limited documentation on the filter format at the org level (it seems to differ from the project level for things like logName). The log_id function does appear to work
- Some log types appear at the org level (things like cloudapis activity) but syslog does not appear to get processed
- Container OS appears to not enable ssh/sudo forwarding by default in fluentd (or I haven't found which log type has this data). I do see this logged to journalctl on a test node
Does anyone have a consistent way to achieve this?
In case anyone else comes across this, we found the following:
- It is possible to set up Stackdriver sinks at the org level through the CLI. They are not visible in the Cloud Console UI, and the CLI also does not let you list log types at the org level
- Filters can be defined on the sinks in addition to logName, but the format can differ from project-level filters
- You need to enable auth log collection in fluentd, which is platform-specific (i.e. the process for google-fluentd on Ubuntu is different from the Stackdriver setup on Container OS)
- For some reason sshd does not write the initial log entry stating the user and IP to syslog (and thus fluentd), so it is not visible to Stackdriver
- Using org sinks to a topic in a child project, with a subscription that forwards to your SIEM of choice, works well
- Still trying to get logs of gcloud ssh commands
One way to approach this could be to export your log sink to BigQuery. Note that when setting up a sink to export logs to BigQuery for all projects under the organization, the 'includeChildren' field defaults to 'False' and must be set to 'True'. When it is set to true, logs from all the projects, folders, and billing accounts contained in the sink's parent resource are also available for export; when it is false, only the logs owned by the sink's parent resource are exported. You should then be able to filter the logs you need in BigQuery.
Another way to approach this would be to script it out: list all the projects with the command gcloud projects list | tail -n +2 | awk -F" " '{print $1}', turn the output into an array that can be iterated over, and retrieve the logs for each project using a command similar to the one in this doc.
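If you would rather do that loop from code than from a shell script, a rough sketch using the google-cloud-logging Java client could look like the following (the project IDs and log name are placeholders; the real project list would come from gcloud projects list):

import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.Logging.EntryListOption;
import com.google.cloud.logging.LoggingOptions;

public class PerProjectAuthLogs {
    public static void main(String[] args) throws Exception {
        // Placeholder project IDs; in practice build this list from `gcloud projects list`.
        String[] projectIds = {"project-a", "project-b"};
        for (String projectId : projectIds) {
            try (Logging logging = LoggingOptions.newBuilder()
                    .setProjectId(projectId)
                    .build()
                    .getService()) {
                // Placeholder log name; point this at whatever log your fluentd
                // config writes the auth entries to.
                String filter = "logName=\"projects/" + projectId + "/logs/syslog\"";
                for (LogEntry entry : logging.listLogEntries(EntryListOption.filter(filter)).iterateAll()) {
                    System.out.println(entry);
                }
            }
        }
    }
}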
Not sure if all this helps solve or work around your issue; I hope so.

Automatically instrumenting Java application using AWS X-Ray

I am trying to achieve automatic instrumentation of all calls made by AWS SDKs for Java using X-Ray.
The X-Ray SDK for Java automatically instruments all AWS SDK clients when you include the AWS SDK Instrumentor submodule in your build dependencies.
(from the documentation)
I have added these to my POM:
aws-xray-recorder-sdk-core
aws-xray-recorder-sdk-aws-sdk
aws-xray-recorder-sdk-spring
aws-xray-recorder-sdk-aws-sdk-instrumentor
and am using e.g. aws-java-sdk-ssm and aws-java-sdk-sqs.
I expected to only have to add the X-Ray packages to my POM and provide adequate IAM policies.
However, when I start my application I get exceptions such as these:
com.amazonaws.xray.exceptions.SegmentNotFoundException: Failed to begin subsegment named 'AWSSimpleSystemsManagement': segment cannot be found.
I tried wrapping the SSM call in a manual segment, and that worked, but then the very next call from another AWS SDK client throws a similar exception.
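The workaround looked roughly like this (the segment and parameter names are just placeholders):

import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagement;
import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagementClientBuilder;
import com.amazonaws.services.simplesystemsmanagement.model.GetParameterRequest;
import com.amazonaws.xray.AWSXRay;

public class ManualSegmentWorkaround {
    public static void main(String[] args) {
        AWSSimpleSystemsManagement ssm = AWSSimpleSystemsManagementClientBuilder.defaultClient();

        // Manually open a segment so the instrumented SSM client has a parent
        // to attach its subsegment to.
        AWSXRay.beginSegment("manual-ssm-segment"); // placeholder segment name
        try {
            ssm.getParameter(new GetParameterRequest().withName("/my/parameter")); // placeholder parameter
        } finally {
            AWSXRay.endSegment();
        }
    }
}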
How do I achieve the automatic instrumentation mentioned in the documentation? Am I misunderstanding something?
It depends on how you make AWS SDK calls in your application. If you have added the X-Ray servlet filter to your Spring application per https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java-filters.html, then each time your application receives a request, the X-Ray servlet filter will open a segment and store it in the thread serving that request. Any AWS SDK calls you make as part of that request/response cycle will pick up that segment as the parent.
The error you got means that the X-Ray instrumentor tried to record the AWS API call as a subsegment but could not find a parent segment (i.e. could not tell which request the call belongs to).
Depending on your use case, you might want to explicitly instrument certain AWS SDK clients and leave others plain, for example if some of those clients make calls from a background worker outside any request.
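For the request-driven case, assuming a Spring Boot application, registering the servlet filter from the linked doc as a bean is usually enough to give every request a parent segment; a minimal sketch (the service name is a placeholder):

import javax.servlet.Filter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.amazonaws.xray.javax.servlet.AWSXRayServletFilter;

@Configuration
public class XRayFilterConfig {
    @Bean
    public Filter tracingFilter() {
        // The name passed here becomes the segment (service) name shown in the X-Ray console.
        return new AWSXRayServletFilter("my-service"); // placeholder service name
    }
}

With that in place, SDK calls made while serving a request are recorded as subsegments of that request's segment; calls made outside any request (for example from a scheduled background job) still need a manually opened segment as above.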

Determine whether I'm running on Cloudhub or Locally

I am building a Mulesoft/Anypoint app for deployment on Cloudhub, and for diagnostic purposes want to be able to determine (from within the app) whether it is running on Cloudhub or in Anypoint Studio on my local development machine:
On Cloudhub, I want to use the Cloudhub connector to create notifications for exceptional situations - but using that connector locally causes an exception.
On my local machine, I want to use very verbose logs with full dumping of the payload (#[message.payloadAs(java.lang.String)]) but want to use much more concise logging when on Cloudhub.
What's the best way to distinguish the current runtime? I can't figure out any automatic system properties that expose this information.
(I realize that I could set my own property called something like system.env=LOCAL and override it with system.env=CLOUDHUB for deployment, but I'm curious whether the platform already provides this information in some way.)
As far as I can tell, the best approach is to use properties. The specific name and values you use don't matter as long as you're consistent. Here's an example:
In your local dev environment, set the following property in mule-app.properties:
system.environment=DEV
When you deploy to Cloudhub, use the deployment tool to change that property to:
system.environment=CLOUDHUB
Then in your message processors, you can reference this property:
<logger
    message="#['${system.environment}' == 'DEV' ? 'verbose log message' : 'concise log message']"
    level="ERROR"
    doc:name="Exception Logger"
/>

Missing SQL tab when using AD provider

I've deployed a copy of opserver, and it is working perfectly when using alladmin as the security setting. However, once I switch it to ad and configure the groups, the SQL tab goes away and I get an access denied message if I try browsing directly to it. The dashboard still displays all SolarWinds data as expected.
The build I'm using is actually from November. I tried a more recent build, but I lose the network information from SolarWinds (the CPU and Mem graphs show, but Net is all blank).
Is there a separate place to configure the SQL permissions that I'm missing?
I think perhaps there was some caching going on for the hub that wasn't happening for the provider, because they are both working now. Since it was a new security group, perhaps it hadn't replicated yet (causing the SQL auth to fail) but the dashboard provider was still using the previous authentication?
I also did discover a neat option while researching this though - the GitHub page mentions that you can also specify security at a provider level in the JSON using the AdminGroups and ViewGroups properties!