No matter what I try it seems my web service cannot access my .aws/credentials file.
I always get this error:
System.UnauthorizedAccessException: Access to the path '{PATH}' is denied.
Here is what I have tried:
Moved the file from the default directory to the website root
Changed the website app pool to run as my user account
Gave Everyone full control of the folder and the file
Verified that when I put the same key and secret into the web.config, the call works
Removed the region from the config
Removed the path from the config
Here is my config (note: if I don't provide the path, even when the file is in the default location, it says no credentials file was found):
<add key="AWSProfileName" value="default" />
<add key="AWSRegion" value="us-east-1"/>
<add key="AWSProfilesLocation" value="{PATH}" />
In the AWS Toolkit I have a `default` profile set up as well that has rights, but that does not help.
I have even tried the legacy format called out in the AWS docs. What am I missing? It seems I have followed everything AWS calls out in their docs.
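For reference, the credentials file itself is in the standard shared-credentials format (values redacted):
[default]
aws_access_key_id = ...
aws_secret_access_key = ...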
I am using Castle Windsor DI so could that be getting in the way?
container.Register(
    Component.For<IAmazonDynamoDB>()
        .ImplementedBy<AmazonDynamoDBClient>()
        .DependsOn(Dependency.OnValue<RegionEndpoint>(RegionEndpoint.USEast1))
        .LifestylePerWebRequest());

container.Register(
    Component.For<IDynamoDBContext>()
        .ImplementedBy<DynamoDBContext>()
        .DependsOn(Dependency.OnComponent<IAmazonDynamoDB, AmazonDynamoDBClient>())
        .DependsOn(Dependency.OnValue<DynamoDBContextConfig>(
            new DynamoDBContextConfig
            {
                TableNamePrefix = configurationManager.GetRequiredAppSetting<string>(Constants.Web.AppSettings.AwsDynamoDbPrefix),
                Conversion = DynamoDBEntryConversion.V2
            }))
        .LifestylePerWebRequest());
The problem you have is that the path ~\.aws\credentials is only defined when you are logged in as a user.
A Windows service such as IIS is not logged in as the user that created the credentials file, so that path is not accessible to the service. In fact, the service does not even know which user profile to look in. For example, if your user name is john, the path resolves to c:\users\john\.aws\credentials, but the Windows service knows nothing about your identity.
Note: I believe - but I am not 100% sure - that a Windows service will look in c:\.aws for credentials. I have used this path in the past, but I cannot find Amazon reference documentation to support it. I no longer store credentials on my EC2 instances, so I am out of touch on the c:\.aws location.
You have a number of choices. One option: create the credentials as usual, then create a directory outside of your IIS installation, such as c:\.aws, copy ~\.aws there, and specify the full path in your config.
A much better and more secure method, if you are running your services on AWS, is to use an IAM role. Create a role with the desired permissions and attach it to your EC2 instance. All AWS SDKs and tools know how to obtain credentials from the instance metadata when no explicit credentials are configured.
There are more methods still, such as the EC2 Parameter Store. Storing credentials on your instances or inside your program is not a good idea.
[Edit after thinking more about the error message]
You may have an issue where IIS does not have access rights to the location where the credentials are stored.
Open Windows Explorer and locate the folder containing your credentials file. Right-click this folder, select Properties, and click the Security tab. From here, choose Edit, then Add. The following users must be added and given at least Read permission: IUSR and IIS_IUSRS. You may also need to add "List folder contents".
Related
I'm using Google Cloud Build for CI/CD for my Django app, and one requirement I have is to set my GOOGLE_APPLICATION_CREDENTIALS so I can perform authenticated actions in my Docker build. For example, I need to run RUN python manage.py collectstatic --noinput, which requires access to my Google Cloud Storage buckets.
I've generated the credentials, and everything works well when I simply include the .json file in my (currently private) repo: it gets pulled into my Docker container by the COPY . . command, and I set the env variable with ENV GOOGLE_APPLICATION_CREDENTIALS=credentials.json. Ultimately, I want to grab the credential value from Secret Manager and create the credentials file during the build stage, so I can remove the credentials from the repo completely. I tried doing this by editing cloudbuild.yaml (referencing this doc) with various implementations of the availableSecrets config, the $$SECRET syntax, and build-args in the docker build command, and trying to access them in the Dockerfile with
ARG GOOGLE_BUILD_CREDS
RUN echo "$GOOGLE_BUILD_CREDS" >> credentials.json
ENV GOOGLE_APPLICATION_CREDENTIALS=credentials.json
with no success.
If someone could advise me on how to implement this in my cloudbuild.yaml and Dockerfile, if it's possible, or suggest a better solution altogether, it would be much appreciated.
This is the relevant part of my cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build
availableSecrets:
  secretManager:
    - versionName: projects/PROJECT_ID/secrets/CREDENTIALS/versions/latest
      env: 'CREDENTIALS'
If your container will run on Cloud Run, it's super easy: remove the service account key file (in most use cases, you never need it).
Keep in mind that a service account key file is a secret containing a private key. If you put it in your container, you are simply storing it in plain text - bad for a secret! (With dive, you can explore your container's contents and steal the secret if you have direct access to the image.)
But I'm sure you know that, because you want to store the secret in a secret manager. Now, a question: how do you access the secret manager? Do you need a service account key file to authenticate to it?
In fact, no.
The solution is to use ADC (Application Default Credentials). With the client libraries, use the default-credentials mechanism to let the library automatically determine the platform and the credentials to use.
On Cloud Run (as on any other Google Cloud service), a metadata server allows the client libraries to obtain credentials for the runtime service account.
In your local environment, you have 2 options:
Use your own credentials. For that, run the command gcloud auth application-default login. These are your own credentials and permissions, not exactly the same as the Cloud Run runtime environment.
Impersonate the Cloud Run runtime service account and act as it when you run your container/code locally. For that, run the command gcloud auth application-default login --impersonate-service-account=<service account email>. Be sure to have the Service Account Token Creator role on the service account.
Then run your app locally and let ADC supply the credentials.
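To illustrate (a minimal sketch, not from the original answer; the project and secret names are placeholders), this is what reading a secret with ADC looks like in Python. Note that no key file appears anywhere:

# pip install google-cloud-secret-manager
from google.cloud import secretmanager

# No explicit credentials here: the client library resolves them through ADC -
# the metadata server on Cloud Run, or your gcloud ADC login locally.
client = secretmanager.SecretManagerServiceClient()

# "my-project" and "my-secret" are placeholder names.
name = "projects/my-project/secrets/my-secret/versions/latest"
response = client.access_secret_version(request={"name": name})
print(response.payload.data.decode("utf-8"))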
I think I've worked out a fix. To solve the error I mentioned in my reply to @guillaume-blaquiere, I updated my build args in cloudbuild.yaml to include --network=cloudbuild, which gave the build access to the correct service account credentials (credit to this answer); the sketch below shows where it goes.
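A sketch of where the flag goes, based on my cloudbuild.yaml above (everything else unchanged):

steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--network=cloudbuild'
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build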
The next issue I faced was with the django-storages library, which returned this exception:
AttributeError: you need a private key to sign credentials.the credentials you are currently using <class 'google.auth.compute_engine.credentials.Credentials'> just contains a token. see https://googleapis.dev/python/google-api-core/latest/auth.html#setting-up-a-service-account for more details.
I then came across this suggestion to add the setting GS_QUERYSTRING_AUTH = False to my Django config, and this seems to do the trick. My only concern is that the documentation here does not go into much detail on the impacts or risks of disabling this (the bucket is public-read, as it recommends). It seems to be working as intended, however, so I will go with this configuration unless a better solution is put forward.
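For reference, the relevant django-storages settings are just these (a sketch; the bucket name is a placeholder):

# settings.py
DEFAULT_FILE_STORAGE = 'storages.backends.gcloud.GoogleCloudStorage'
GS_BUCKET_NAME = 'my-bucket'  # placeholder
GS_QUERYSTRING_AUTH = False   # plain public URLs, no signed query string, so token-only credentials work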
I'm trying to use the LogonUser function.
The user I'm trying to impersonate is not in the same domain.
I can mount the server using the credentials just fine.
But whenever I call LogonUser
bLogonSucc = ::LogonUser(sUserName
    , sDomain
    , sUserPW
    , LOGON32_LOGON_INTERACTIVE
    , LOGON32_PROVIDER_DEFAULT
    , &hToken);
I get the error 1935:
ERROR_AUTHENTICATION_FIREWALL_FAILED
1935 (0x78F)
The computer you are signing into is protected by an authentication firewall. The specified account is not allowed to authenticate to the computer.
My goal is to open a file on a server, using this user as the login for that destination.
If I use LOGON32_LOGON_NEW_CREDENTIALS as the logon type, the LogonUser call and the impersonation succeed, but access still fails later on in the code.
I can't seem to find a solution for this. Any ideas on how to solve it?
The firewall should be set up correctly.
This error occurs when the user or group has been granted the correct rights to access the share, but the share is in another domain and, even though that domain trusts the one the user is coming from, the trust was set up with 'selective authentication'.
You can try this.
Go to the domain that's providing the share and log into a domain controller
Open Control Panel\System and Security\Administrative Tools
Open 'Active Directory Users and Computers'
Select View > Advanced Features
Locate the COMPUTER you are trying to authenticate to
Open Properties > Security
Add the user (or group) that requires access
Grant the "Allowed to authenticate" right
Apply and OK
I'm trying to get the list of the intents in my Dialogflow agent using Dialogflow's V2 APIs but have been getting the following error:
PermissionDenied: 403 IAM permission 'dialogflow.intents.list' on 'projects/xxxx/agent' denied.
Here are the steps I took:
I created a new agent (with V2 APIs enabled) and a new service account for it.
I downloaded the JSON key and set my GOOGLE_APPLICATION_CREDENTIALS variable to its path.
Following is my code:
import os
import dialogflow_v2 as dialogflow

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/user/folder/service-account-key.json"

client = dialogflow.IntentsClient()
parent = client.project_agent_path('[PROJECT_ID]')
for element in client.list_intents(parent):
    pass
I have made various agents and service accounts, and even changed the role from Admin to Client, but I can't figure out a solution. I tried the following solution, but it didn't work:
Tried Solution: DialogFlow PermissionDenied: 403 IAM permission 'dialogflow.sessions.detectIntent'
There is no need to create a new agent. You can edit the existing agent's IAM settings.
In Dialogflow's console, go to Settings ⚙. Under the General tab you'll see the Project ID section with a Google Cloud link; click it to open the Google Cloud console.
In Google Cloud, go to IAM & Admin > IAM, under the Members tab. Find the name of your agent's service account and click Edit.
Give the agent's account admin permissions (for example, the Dialogflow API Admin role) so it has permission to list intents.
The problem lies in the IAM section of GCP. You are probably making the request with an identity whose role does not have the necessary authorizations.
Look in your key.json file for the field "client_email". Then go to the IAM page and assign the member with that email a role that has the needed capabilities (e.g., Admin).
This solved my problem.
In Dialogflow's console, go to Settings ⚙. Under the General tab you'll see the Project ID section with a Google Cloud link; click it to open the Google Cloud console.
(Optional) In the Cloud console, go to the menu icon ☰ > APIs & Services > Library. Select any APIs you need (if any) and click Enable.
In the Cloud console, under the menu icon ☰ > APIs & Services > Credentials > Create Credentials > Service Account Key.
Under Create service account key, select New Service Account from the dropdown, enter a service account name, and for Role choose Owner > Create.
A JSON private key file will be downloaded to your local machine; you will need it.
For JavaScript:
In the index.js file you can do service account auth with JWT:
const serviceAccount = {}; // Starts with {"type": "service_account",...
// Set up Google Calendar Service account credentials
const serviceAccountAuth = new google.auth.JWT({
  email: serviceAccount.client_email,
  key: serviceAccount.private_key,
  scopes: 'https://www.googleapis.com/auth/xxxxxxx'
});
For Python:
There's a Google Auth Python Library available via pip install google-auth and you can check out more here.
When you create the IntentsClient, point it at the key file:

key_file_path = "/home/user/folder/service-account-key.json"
client = dialogflow.IntentsClient.from_service_account_file(key_file_path)
See the Intents list reference.
This error message is usually thrown when the application is not authenticated correctly, for reasons such as missing files, invalid credential paths, or incorrect environment variable assignments, among other causes. Keep in mind that when you set an environment variable in a session, it is reset every time the session is dropped.
Based on this, I recommend you validate that the credentials file and file path are correctly assigned, and follow the Obtaining and providing service account credentials manually guide to specify your service account file explicitly in your code. This way you set it permanently and can verify that you are passing the service credentials correctly.
Example of passing the path to the service account key in code:
def explicit():
    from google.cloud import storage

    # Explicitly use service account credentials by specifying the private key
    # file.
    storage_client = storage.Client.from_service_account_json('service_account.json')

    # Make an authenticated API request
    buckets = list(storage_client.list_buckets())
    print(buckets)
Also try creating the project in the Dialogflow console:
https://dialogflow.cloud.google.com/
You need to set the following as environment variables:
googleProjectID: ""
dialogFlowSessionID: "anything"
dialogFlowSessionLanguageCode: "en-US"
googleClientEmail: ""
googlePrivateKey: ""
I think you might have missed the Enable the API section in the documentation setup.
Here is that link:
https://cloud.google.com/dialogflow/cx/docs/quick/setup#api
After clicking the link, select the chatbot project you created and follow the instructions given there.
The permissions I have given for that project are Owner and Editor.
After this, try the code in this link:
https://cloud.google.com/dialogflow/es/docs/quick/api#detect_intent
You should get a response from your chatbot.
Hope this helps!
I am using CarrierWaveDirect to upload high-resolution images to S3. I then use each uploaded image to generate multiple versions, which are made public through CloudFront URLs.
The uploaded high-res files need to remain private to anonymous users, but the web application needs to access the private file in order to generate the other versions.
I am currently setting all uploaded files to private in the CarrierWave initializer via
config.fog_public = false
I have an IAM policy for the web application that allows full admin access. I have also set the access key and secret key for that IAM user in the app. Given these two things, I would expect the web app to be able to access the private file and continue with processing, but it is denied access to the private file.
Note: when I log into the user account associated with the web app, I am able to access the private file, because a token is appended to the URL.
I can't figure out why the app cannot access the private file given the access key and secret key.
I was having a hard time getting to the core of your problem. I am quite certain your question is not
unable to access private s3 file even though IAM policy grants access
but rather
how to handcraft a presigned URL for GETting a private file on S3
The gist shows you're trying to create the presigned URL for GET yourself. While this is perfectly fine, it's also very error-prone.
Please verify that what you're trying to do works at all, using the AWS SDK for Ruby (I only post code known to work with version 1 here, but if you aren't held back by legacy code, start with version 2):
s3 = AWS::S3.new
bucket = s3.buckets["your-bucket-name"]
obj = bucket.objects["your-object-path"]
obj.url_for(:read, expires: 10*60) # generate a URL that expires in 10 minutes
See the docs for AWS::S3::S3Object#url_for and Aws::S3::Object#presigned_url for details.
You may need to read up on passing args to AWS::S3.new here (for credentials, regions and so).
I'd advise you take the following steps:
Make it work locally using the access_key_id and secret_access_key
Make it work in your worker
If it works, you can compare the query string the SDK returned with the one you handcrafted yourself. Maybe you can spot an error.
But in any case, I suggest you use higher-level SDKs to do things like that for you.
If this doesn't get you anywhere, please post a comment with your new findings.
I use WSO2 5.0.0 as IdP, and the user store is Active Directory (AD). Users and roles are listed in the WSO2 Management Console, and I am also able to log in to WSO2 with a user/password stored in AD.
So everything works fine.
The only problem I have is that when I request a user's roles (e.g., via the RemoteUserStoreManager web service method getUserClaimValues), I get the WSO2 roles and not the Active Directory roles assigned to the user in AD. Also, only the WSO2 roles are mapped to users in WSO2.
I have only basic knowledge of AD (I did not set up the current connection between WSO2 and AD), so I have no idea where to look in order to resolve this problem.
Does anybody have a hint concerning this issue (user-mgt.xml, the WSO2 console, ...)?
Thanks a lot for any help!
So you need to retrieve the AD roles of the user? Based on what you have mentioned, please do the following to resolve this issue.
Add the following properties under the user store manager configuration in the user-mgt.xml file, if they are not already there:
<Property name="BackLinksEnabled">true</Property>
<Property name="MemberOfAttribute">memberOf</Property>
Please restart the server and verify.
If that does not resolve it, enable the debug logs in the user kernel to verify where the issue is generated.
To enable the logs:
Locate the log4j.properties file in the <IS_HOME>/repository/conf directory.
Add the following entry to the file:
log4j.logger.org.wso2.carbon.user.core=DEBUG
Restart the server and invoke it again. You should see LDAP-related logs that will help identify the issue.