I have an AWS credentials file working fine locally for sending SES emails in Windows 10.
c:/user/myusername/.aws/credentials
(with [default] profile info in).
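For reference, the file contains a standard profile section like this (the key values below are AWS's documented example placeholders, not real credentials):

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```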
The documentation, and many articles I've found, says 'myusername' should be replaced with the username in use on the machine. However, on Windows Server 2012 I've tried placing the file in these locations, and none of them work:
c:/user/Administrator/.aws/credentials
c:/user/IIS_IUSRS/.aws/credentials
c:/user/nameofsite.com/.aws/credentials (w3wp lists this as username)
I get an 'Unable to find credentials' error.
Which folder do I need to put these credentials in on the server to get this working?
(I can't get the appsettings.json ProfilesLocation setting working in this .NET Core MVC 2 app - something I can get working in MVC 5 apps - so getting this working in the way described above is required).
Thanks.
According to the link
https://docs.aws.amazon.com/powershell/latest/userguide/specifying-your-aws-credentials.html
Cmdlets in AWS Tools for PowerShell Core accept AWS access and secret keys or the names of credential profiles when they run, similarly to the AWS Tools for Windows PowerShell. When they run on Windows, both modules have access to the AWS SDK for .NET credential store file (stored in the per-user AppData\Local\AWSToolkit\RegisteredAccounts.json file). This file stores your keys in encrypted format, and cannot be used on a different computer. It is the first file that the AWS Tools for PowerShell searches for a credential profile, and is also the file where the AWS Tools for PowerShell stores credential profiles. For more information about the AWS SDK for .NET credential store file, see Configuring AWS Credentials. The AWS Tools for PowerShell module does not currently support writing credentials to other files or locations.
For me, I actually store AWS credentials in the applicationhost.config file on the server. For dev machines, I store them in the user store and access them like normal config-file-based properties.
Related
I am trying to set credentials for DynamoDB following the instructions here: https://aws.amazon.com/getting-started/hands-on/real-time-leaderboard-amazon-aurora-serverless-elasticache/?trk=gs_card.
Now, I want to set credentials inside const client = new DynamoDBClient({ credential here }) by following https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/awsauthinputconfig.html#signer. I wasn't sure of the format of the credentials inside the new DynamoDBClient() call, so I tried looking for the credentials code. The documentation says credentials is defined in packages/middleware-signing/dist/types/configurations.d.ts:6, but I cannot find that file at all.
How would I set the configuration, and what do they mean when they say credentials is defined in 'packages/middleware-signing/dist/types/configurations.d.ts:6'?
All AWS SDKs have their own Developer Guides. The AWS SDK for JavaScript is no different. To learn how to work with the AWS SDK for JavaScript, refer to the Developer Guide:
AWS SDK for JavaScript v3 Developer Guide
This guide contains all the information you need to get up and running with this SDK, including how to work with credentials.
To learn how to work with the JavaScript SDK and DynamoDB, see:
Build an app to submit data to DynamoDB
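As a concrete hint beyond the guides above: in SDK v3 the credentials property on the client config is a plain object with accessKeyId and secretAccessKey (plus an optional sessionToken). A minimal sketch, with placeholder values and the SDK import commented out so the snippet stands alone:

```javascript
// Sketch: the shape of the credentials object DynamoDBClient accepts.
// The values below are placeholders, not real keys.
// const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");

const config = {
  region: "us-east-1",
  credentials: {
    accessKeyId: "AKIA_EXAMPLE",       // placeholder
    secretAccessKey: "example-secret", // placeholder
  },
};

// const client = new DynamoDBClient(config);
```

In practice you usually omit credentials entirely and let the SDK's default provider chain pick them up from the environment or the shared credentials file.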
Hi, I am trying to upload a file from GCS to Google Drive using
from airflow.contrib.operators.gcs_to_gdrive_operator import GcsToGDriveOperator
This is how the DAG looks:
copy_to_gdrive = GcsToGDriveOperator(
    task_id="copy_to_gdrive",
    source_bucket="my_source_bucket_on_gcs",
    source_object="airflow-dag-test/report.csv",
    destination_object="/airflow-test/report.csv",
    gcp_conn_id="bigquery_default",
    dag=dag
)
This code executes without any errors, and in the logs I can see that the file is downloaded locally and uploaded to Google Drive successfully as well.
This code is executed by a service account. The issue I am facing is that I am not able to find the file or the directory this DAG is creating/uploading to.
I have tried several permutations/combinations of the path for "destination_object", but nothing seems to work, and the Google docs are not helpful either.
I can see in the API logs that the drive.create API is being called, but where it is creating the file is unknown. Has anyone experienced this? Any help or tips would be greatly appreciated. Thanks!
Your service account is a Google account, and, as a Google account, it has access to its own Drive. The files are correctly copied to Drive - but to the Drive of the service account!
You never specify the account, so how can Airflow know that it has to use yours?
Look at the operator documentation
delegate_to (str) – The account to impersonate, if any. For this to work, the service account making the request must have domain-wide delegation enabled.
Use this parameter, fill it with your email address, and enable domain-wide delegation for your service account.
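A sketch of the task from the question with delegate_to added (the email is a placeholder, and the operator call is commented so the snippet stands alone; everything else mirrors the original task):

```python
# Sketch: same task as in the question, plus delegate_to, so the file lands
# in the impersonated user's Drive instead of the service account's own Drive.
# Requires domain-wide delegation to be enabled for the service account.
task_kwargs = dict(
    task_id="copy_to_gdrive",
    source_bucket="my_source_bucket_on_gcs",
    source_object="airflow-dag-test/report.csv",
    destination_object="/airflow-test/report.csv",
    gcp_conn_id="bigquery_default",
    delegate_to="you@yourdomain.com",  # placeholder: the Drive account to impersonate
)
# copy_to_gdrive = GcsToGDriveOperator(**task_kwargs, dag=dag)
```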
I configured a Parse Server on AWS Elastic Beanstalk using this guide. I've tested it and it all works fine.
Now I can't find a way to deploy parse dashboard on my server.
I did deploy Parse Dashboard on my localhost and connected it to the application on the server, but this way I cannot manage (add and remove) my apps.
Another problem is that Parse Dashboard is missing cloud code by default. I found this on git, but I can't understand where to add the requested endpoints. Is it something like adding app.use('/scripts', express.static(path.join(__dirname, '/scripts'))); to the index.js file?
In order to deploy parse-dashboard to your EC2 instance, you need to follow the Deploying Parse Dashboard section on the parse-dashboard GitHub page:
parse-dashboard GitHub page
Please make sure that when you deploy parse-dashboard you use HTTPS and basic authentication (this is also part of the guide).
Now, regarding cloud code: the ability to deploy cloud code via the Parse CLI and to view the Node.js code in the dashboard are not available in parse-server; those were parse.com features. Cloud code in parse-server is handled by modifying the main.js file under the cloud folder, and deployment must be done manually by you. The big advantage of parse-server cloud code is that you can use any Node.js module you want from there; you are not restricted to the modules that parse.com allowed.
Another point about the dashboard: you can create an express application, add parse-server and parse-dashboard as middleware to it, and deploy the whole application to AWS. You then get both parse-server (available under the /parse path, unless you changed it to something else) and parse-dashboard (available under the /dashboard path).
Enjoy :)
I would like to understand the steps for deploying a WSO2 server to a production environment.
Using WSO2 ESB as an example, I have seen instructions for extracting the binary and running the startup script, but these steps alone don't seem robust enough for a production environment. In a production environment, I would expect to see some additional steps:
what directory is normally used for installing? /opt, /usr/local, something else?
create a unix user account and unix group for running the service
setting up ulimits - are ulimits normally configured for wso2 services?
creating init.d scripts for starting the service automatically (there is a blog here, but as discussed in the blog comments, it seems to go against a warning in the official ESB documentation not to install the service as a daemon)
to security harden the service:
e.g. replace self signed certificates - which certificates?
e.g. change default passwords - which user accounts?
what else needs to be security hardened?
configuring clustering (this seems to be documented here)
configuring the credential store:
production database credential store (this seems to be documented here), or
ldap credential store
Question: What are the steps required to deploy a WSO2 server to a production environment?
Question: I've also seen some puppet scripts. Are these scripts production ready?
NOTE: I've previously posted this question on the wso2 mailing lists which is primarily attended by WSO2 employees. I'm also posting here to the user community who hopefully have put some wso2 servers into production.
what directory is normally used for installing? /opt, /usr/local, something else?
/opt
create a unix user account and unix group for running the service
Have a dedicated admin user to install and run the server.
setting up ulimits - are ulimits normally configured for wso2 services?
You need to define ulimits at the OS level.
to security harden the service: e.g. replace self signed certificates - which certificates? e.g. change default passwords - which user accounts? what else needs to be security hardened?
You need to change the default self-signed server certificates (the certs are in wso2carbon.jks).
Have a strong admin password and encrypt it in all config files using the cipher tool.
You can check the documentation for further info.
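The steps above can be sketched as a shell snippet; the privileged commands are left commented so it is safe to run as-is, and the paths and the wso2carbon alias are the usual defaults for a /opt install (verify against your version's docs):

```shell
# Sketch of common WSO2 hardening steps (paths assume /opt/wso2esb).

# Dedicated service user:
# sudo useradd -r -s /sbin/nologin wso2

# Replace the default self-signed cert in the default keystore:
# keytool -genkeypair -alias wso2carbon -keyalg RSA -keysize 2048 \
#   -keystore /opt/wso2esb/repository/resources/security/wso2carbon.jks

# Encrypt passwords in config files with the bundled cipher tool:
# /opt/wso2esb/bin/ciphertool.sh -Dconfigure

# ulimit entries typically appended to /etc/security/limits.conf:
limits_conf="wso2 soft nofile 4096
wso2 hard nofile 65536"
printf '%s\n' "$limits_conf"
```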
I am wondering how exactly to programmatically log in to another user account in Windows, since Vista removed GINA. I have read that creating a credential provider is the replacement, although I have yet to see an example of a credential provider logging a user in or loading the Windows files the way GINA did. I have seen third-party terminal servers do this before. I want to be able to log another user in and load Windows files while still logged into the original account. How is this done?