Can a GPO expert help me out here?
I am trying to modify some of the audit policy settings on a sandbox server for testing. When I set the policy locally, log out of the server, then log back in, it gets set back to the default value of "no auditing".
If I run an RSOP on the server, the audit policy settings I am changing all show as "not defined". I thought I could change any policy on a Windows server via the local GPO as long as the settings I'm modifying are not defined in a GPO being pushed down from the domain. Is that not correct?
Since there is no domain policy setting the audit settings in question, why won't the local GPO stick?
Server is running Windows 2012 R2.
The local GPO should stick in that situation.
My guess is that the "classic" audit policies and the Advanced Audit Policy Configuration settings are colliding: once any advanced audit policy is applied, Windows ignores the corresponding legacy settings. You can compare the effective state with auditpol /get /category:* to see which set is winning.
https://technet.microsoft.com/en-us/library/dd408940(v=ws.10).aspx
This is my question (the quoted part below is borrowed from another post, but my ultimate question is different):
"I have written some code to retrieve my secrets from the AWS Secrets Manager to be used for further processing of other components. In my development environment I configured my credentials using AWS CLI. Once the code was compiled I am able to run it from VS and also from the exe that is generated."
My question is that once the code is on my IIS production server, repeating those steps doesn't work: I configured the credentials under the account I'm logged in as, but the IIS process doesn't run as that user, so the code can't find them.
I want the IIS process to be able to access these credentials under its own user profile. How do I place the credentials under that profile? This question seems to have the answer (C:\Users\<IIS_app_name>\.aws\credentials), but how do I actually find out what <IIS_app_name> is? When I try that path with my <IIS_app_name>, I get an error that it doesn't exist.
This is an on-prem server that accesses AWS Secrets Manager.
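For what it's worth, the reason the interactive setup doesn't carry over is that the AWS SDKs resolve the shared credentials file relative to the profile of the user the *process* runs as. A minimal sketch of that resolution (the app-pool profile name below is hypothetical):

```python
import ntpath  # Windows path rules, regardless of where this runs


def shared_credentials_path(user_profile_dir):
    """The AWS SDKs look for the shared credentials file under the
    profile of the user the process runs as, not the logged-in user."""
    return ntpath.join(user_profile_dir, ".aws", "credentials")


# Credentials configured interactively land under your own profile:
print(shared_credentials_path(r"C:\Users\Administrator"))

# An IIS app pool runs as its own identity, so the SDK looks under the
# app pool's profile instead ("MyAppPool" is a hypothetical pool name):
print(shared_credentials_path(r"C:\Users\MyAppPool"))
```

Note that the app pool's profile directory under C:\Users is only created once "Load User Profile" is enabled in the pool's advanced settings; until then the path genuinely doesn't exist, which would match the error you're seeing.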
We have a few Elastic Beanstalk applications and want to set up users who can see the events, status and logs for those applications in the AWS console, but who cannot even see the configuration. Is this possible? If I try to include all the actions apart from DescribeConfigurationSettings, that user cannot view the environment at all, so it appears I have to allow Describe* just for the user to access the environment.
Do I have to make them use the EB CLI to fetch the logs, or is there a way to construct a policy so that they can view an environment but cannot access the Configuration part of it?
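One approach worth trying is to allow the broad read actions and attach an explicit Deny for just the configuration call, since in IAM an explicit Deny overrides any Allow. A sketch of such a policy (untested, with resources simplified to "*" for illustration):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticbeanstalk:Describe*",
        "elasticbeanstalk:List*",
        "elasticbeanstalk:RequestEnvironmentInfo",
        "elasticbeanstalk:RetrieveEnvironmentInfo"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": "elasticbeanstalk:DescribeConfigurationSettings",
      "Resource": "*"
    }
  ]
}
```

The console may still render an error on pages that call the denied action, but the Deny guarantees the configuration data itself is never returned.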
I have some trouble concerning the RDS / Managed AD connection:
I've set up the AWS Managed Microsoft AD and added some users.
Then, I've set up an MS-SQL Database in RDS.
Now, while accessing it via SQL Server Management Studio works flawlessly I simply cannot add the AD users I've created.
I get the following error: The program cannot open the required dialog box because it cannot determine whether the computer named "Network Name Resource" is joined to a domain
Looking at the AD, I can see that the RDS instance is indeed missing.
How can that be? In the RDS console I can clearly see it attached to the domain.
I have searched this issue for quite some time and hope someone can help me out here...
You must be signed into SSMS with a domain account that has privileges to add/modify logins for that search box to work.
Furthermore, it is non-obvious, but you can confirm that your RDS instance is in the domain by using ADAC or ADUC and looking under: AWS Reserved > RDS.
We have a scenario where a Secure Agent is installed in a Windows Server environment under a specific Informatica Cloud user.
Now we would like to change the user the Secure Agent runs under. What is the procedure to change the username and password to another Informatica Cloud user? Are there any precautions to take, for example regarding visibility and rights?
I'm not entirely sure I understand your question, but from what I gather you want to change the owner of the agent, right?
To change the owner of the Secure Agent:
For Windows: "Change secure agent user - for Windows"
For Linux: "Change secure agent user - for Linux"
As a last resort, you can delete the Secure Agent folder and perform a fresh installation.
I'm deploying my Django application on EC2 on AWS.
I configured credentials in ~/.boto and finally succeeded in running 'python manage.py collectstatic' (I know the credentials were the problem, because setting up the ~/.boto configuration file fixed the error I was getting before).
But after that configuration, when I query an image file on S3 mapped to my ImageField model, I get the error message below:
No handler was ready to authenticate. 1 handlers were checked.
['HmacAuthV1Handler'] Check your credentials
I thought I had authentication set up, so why is this message occurring?
Using a role is absolutely the correct way to handle authentication from EC2 to AWS; putting long-term credentials on the machine is a disgusting alternative. Assuming you're using a standard SDK (and boto absolutely is one), the SDK will automatically use the role's temporary credentials to authenticate, so all you have to do is launch the instance with an "instance profile" specifying a role, and you get secure credential delivery for free.
You'll have to replace your server to do so, but being able to recreate servers is fundamental to success in AWS anyway. The sooner you start thinking that way, the better the cloud will work for you.
Once the role is attached to the instance, the policies defining the role's permissions can be modified dynamically, so you don't need to get the permissions sorted out before creating the role.
At a high level, you specify a role at instance creation time. The EC2 console can facilitate the process of creating a role, allowing the EC2 service to assume it, and specifying it at instance creation time.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html provides detailed instructions.
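To illustrate the fallback order described above, here is a simplified, purely illustrative model of how boto picks a credential source. This is not boto's real API, just a sketch of the chain:

```python
def credential_source(explicit_keys=False, env_vars=False,
                      boto_config=False, instance_profile=False):
    """Simplified sketch of boto's credential lookup order.

    boto tries each source in turn; the error in the question
    ("No handler was ready to authenticate") is raised when every
    source comes up empty.
    """
    if explicit_keys:
        return "keys passed to connect_s3()"
    if env_vars:
        return "AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY env vars"
    if boto_config:
        return "~/.boto config file"
    if instance_profile:
        return "temporary credentials from instance metadata (IAM role)"
    return "nothing -> NoAuthHandlerFound"


# With a role attached via an instance profile, no other source is
# needed; the web server process picks up credentials automatically:
print(credential_source(instance_profile=True))
```

This also explains the original symptom: the ~/.boto file lived in the profile of the user who ran collectstatic, not the user the web server runs as, so the process hit the end of the chain. A role sidesteps the per-user-profile problem entirely.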