Running sqlpackage.exe from AWS CodeDeploy throws an exception

I'm attempting to run sqlpackage.exe from a script executed by AWS CodeDeploy.
The sqlpackage command runs fine from a local CMD prompt when logged in as the administrator but does not run when called as part of the CodeDeploy pipeline.
The following error occurs:
An unexpected failure occurred: DacInstance with the specified instance_id does not exist..
Unhandled Exception: System.Data.SqlClient.SqlException: DacInstance with the specified instance_id does not exist.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
It would appear that a connection cannot be made to the database.
CodeDeploy runs as a Windows service under the Local System account, whereas the command prompt where this works runs under the Administrator account. This is the only difference, so I think this must be a permissions issue.
It would appear that a dacpac needs to be installed by a user with sysadmin privileges. As a test, I granted the SQL Server login NT AUTHORITY\SYSTEM the dbcreator server role.
The deployment then failed with the following error.
The database settings cannot be modified. You must be a SysAdmin to apply these settings.
An error occurred while the batch was being executed.
Updating database (Failed)
I am unsure how to proceed, however. I'm guessing that making NT AUTHORITY\SYSTEM a sysadmin is a bad idea!

The CodeDeploy Host Agent Service runs as the LocalSystem user, which should have NT AUTHORITY\SYSTEM and BUILTIN\Administrators privileges.
This is how the CodeDeploy agent executes your script:
powershell.exe -ExecutionPolicy Bypass -File <absolute_path_to_your_script_here>
If you are putting the executable in a folder within your deployment package, you can try putting it at the root alongside the appspec.yml file.
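For reference, a minimal Windows appspec.yml that keeps the script at the package root might look like this (the destination path, hook choice, and script name are illustrative):
version: 0.0
os: windows
files:
  - source: \
    destination: C:\deploy
hooks:
  AfterInstall:
    - location: deploy-db.ps1
      timeout: 300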
That being said, we have seen this issue with older versions of the host agent; it should be resolved with the latest version, released in March 2017.
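In the meantime, one way to take the service account's Windows identity out of the picture is to have the deployed script pass SQL authentication to sqlpackage.exe explicitly, using a dedicated SQL login with the needed rights. A rough sketch (the install path, server, database, login, and secret source are placeholders):
# deploy-db.ps1 -- hypothetical deployment hook script. Uses SQL authentication
# so the result does not depend on whether LocalSystem or Administrator runs it.
& "C:\Program Files\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe" `
    /Action:Publish `
    /SourceFile:"C:\deploy\MyDatabase.dacpac" `
    /TargetServerName:"localhost" `
    /TargetDatabaseName:"MyDatabase" `
    /TargetUser:"deploy_login" `
    /TargetPassword:$env:DEPLOY_DB_PASSWORD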

Related

Cloudwatch agent not using environment variable credentials on Windows

I'm trying to configure an AMI using a script that installs the unified CloudWatch agent on both AWS and on-premises Windows machines, using static IAM credentials for both. As part of the script, I set the credentials statically (as a test) using
$Env:AWS_ACCESS_KEY_ID="myaccesskey"
$Env:AWS_SECRET_ACCESS_KEY="mysecretkey"
$Env:AWS_DEFAULT_REGION="us-east-1"
Once I have the AMI, I create a machine and connect to it, and then verify the credentials are there by running aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************C6IF              env
secret_key     ****************SCnC              env
    region                us-east-1              env    ['AWS_REGION', 'AWS_DEFAULT_REGION']
But when I start the agent, I get the following error in the logs.
2022-12-26T17:51:49Z I! First time setting retention for log group test-cloudwatch-agent, update map to avoid setting twice
2022-12-26T17:51:49Z E! Failed to get credential from session: NoCredentialProviders: no valid providers in chain
caused by: EnvAccessKeyNotFound: failed to find credentials in the environment.
SharedCredsLoad: failed to load profile, .
EC2RoleRequestError: no EC2 instance role found
caused by: EC2MetadataError: failed to make EC2Metadata request
I'm using the Administrator user both for the installation of the agent and when RDPing into the machine. Is there anything I'm missing?
I've already tried adding the credentials to the .aws/credentials file and modifying the common-config.toml file to use a profile. That works, but in my case I just want to use the environment variables.
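For completeness, the working common-config.toml variant looked roughly like this (the profile name and path are illustrative):
[credentials]
  shared_credential_profile = "AmazonCloudWatchAgent"
  shared_credential_file = "C:\\Users\\Administrator\\.aws\\credentials"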
EDIT: I tried adding the credentials in the user data script and modified slightly how they are created, and now it seems to work.
$env:aws_access_key_id = "myaccesskeyid"
$env:aws_secret_access_key = "mysecretaccesskey"
[System.Environment]::SetEnvironmentVariable('AWS_ACCESS_KEY_ID',$env:aws_access_key_id,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_SECRET_ACCESS_KEY',$env:aws_secret_access_key,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_DEFAULT_REGION','us-east-1',[System.EnvironmentVariableTarget]::Machine)
Now the problem is that I'm trying to start the agent at the end of the user data script with the command from the documentation, but it does nothing (I see the command in the agent logs, but there is no error). If I RDP into the machine and launch the same command in PowerShell, it works fine. The command is:
& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m onPrem -s -c file:"C:\ProgramData\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent.json"
I finally was able to make it work, but I'm not sure why it didn't before. I was using
$env:aws_access_key_id = "accesskeyid"
$env:aws_secret_access_key = "secretkeyid"
[System.Environment]::SetEnvironmentVariable('AWS_ACCESS_KEY_ID',$env:aws_access_key_id,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_SECRET_ACCESS_KEY',$env:aws_secret_access_key,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_DEFAULT_REGION','us-east-1',[System.EnvironmentVariableTarget]::Machine)
to set the variables, but then the agent was failing to initialize. I had to add
$env:aws_default_region = "us-east-1"
so it was able to run. I couldn't find the issue earlier because on Windows Server 2022 I don't get the logs from the execution; I had to use Windows Server 2019 to actually see the error when launching the agent.
I still don't know why the environment variables I set at machine scope worked once I was logged into the machine but not when used as part of the user data script.
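My best guess is that SetEnvironmentVariable with the Machine target only affects processes started afterwards: the user data script's own process keeps the environment it started with, while a later RDP logon session picks up the new values. Setting each variable at both process and machine scope, roughly as in this sketch, sidesteps the mismatch (key values are placeholders):
# Set each variable for the current process (so the agent launched later in
# this same script sees it) and at machine scope (for future sessions/services).
$vars = @{
    'AWS_ACCESS_KEY_ID'     = 'myaccesskeyid'
    'AWS_SECRET_ACCESS_KEY' = 'mysecretaccesskey'
    'AWS_DEFAULT_REGION'    = 'us-east-1'
}
foreach ($name in $vars.Keys) {
    Set-Item -Path "Env:$name" -Value $vars[$name]   # process scope
    [System.Environment]::SetEnvironmentVariable($name, $vars[$name], [System.EnvironmentVariableTarget]::Machine)
}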

Unable to connect google compute engine, getting permission denied error

I have accidentally changed the permissions of the .ssh folder to 600, and now I am not able to log in to the GCP server through SSH, as it gives me a permission denied error.
**Connection Failed**
You cannot connect to the VM instance because of an unexpected error. Wait a few moments and then try again.
I tried multiple options, like using an SSH troubleshooting instance, enabling the serial console, and SSH private-key login.
Thank you in advance.
One of the simple ways to fix this would be to use a startup script. In this script, just execute chmod 700 /path/to/your/.ssh.
Startup scripts are executed with root privileges, so this should be able to fix your problem with the .ssh folder permissions.
So, what you need to do (example commands after this list):
Set the startup script.
Restart the VM.
Wait a minute or two to make sure the script got executed.
Remove the startup script from the machine. (no need to restart again)
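Assuming the gcloud CLI and a hypothetical instance name, zone, and username, the sequence might look like this:
# 1-2. set the startup script and restart the VM
gcloud compute instances add-metadata my-vm --zone us-central1-a --metadata startup-script='#! /bin/bash
chmod 700 /home/my_user/.ssh
chmod 600 /home/my_user/.ssh/authorized_keys'
gcloud compute instances reset my-vm --zone us-central1-a
# 4. once SSH works again, remove the script (no restart needed)
gcloud compute instances remove-metadata my-vm --zone us-central1-a --keys startup-script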
Thank you guys for all your support; my problem got solved by following the document below:
Serial Console with local password using a startup script

AWS Device Farm - Schedule Run - Errors

I am hoping someone here has come across this issue and has an answer for me.
I have set up a project in Device Farm and have written automation tests in Appium using JS.
When I create a run manually using the console the runs succeed without any issues and my tests get executed.
However, when I try to schedule a run using the CLI with the following command, it fails with an error:
aws devicefarm schedule-run --project-arn projectArn --app-arn appArn --device-pool-arn dpARN --name myTestRun --test type=APPIUM_NODE,testPackageArn="testPkgArn"
Error : An error occurred (ArgumentException) when calling the ScheduleRun operation: Standard Test environment is not supported for testType: APPIUM_NODE
CLI versions: aws-cli/1.17.0 Python/3.8.1 Darwin/19.2.0 botocore/1.14.0
That is currently expected for the standard environment. The command will need to use the custom environment, which the CLI can do by setting the testSpecArn value.
This ARN is an upload in Device Farm consisting of a .yaml file that defines how the tests are executed.
This process is discussed here
https://docs.aws.amazon.com/devicefarm/latest/developerguide/how-to-create-test-run.html#how-to-create-test-run-cli-step6
The error in this case is caused by the fact that the APPIUM_NODE test type can only be used with the custom environment currently.
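Concretely, the spec file is uploaded with type APPIUM_NODE_TEST_SPEC, and its ARN is then added to the --test shorthand. Reusing the placeholder ARNs from the question, the calls would look roughly like:
aws devicefarm create-upload --project-arn projectArn --name spec.yml --type APPIUM_NODE_TEST_SPEC
# (after PUTting the .yml file to the presigned URL returned above, its upload ARN becomes testSpecArn)
aws devicefarm schedule-run --project-arn projectArn --app-arn appArn --device-pool-arn dpARN --name myTestRun --test type=APPIUM_NODE,testPackageArn="testPkgArn",testSpecArn="testSpecArn"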

Django + PyOrient : db_create socket.timeout exception in EC2

I'm using OrientDB community version 2.2.35 and pyorient 1.5.5.
client.db_create(db_name, pyorient.DB_TYPE_GRAPH, pyorient.STORAGE_TYPE_PLOCAL)
This runs perfectly fine locally after starting the server.
But when I run the same code on an EC2 machine, it throws a socket.timeout exception.
I initially thought it could be a CORS issue, but it's not. What else could be the issue?
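For context, the full sequence is roughly the following sketch (host, credentials, and database name are placeholders):
import pyorient

db_name = "mydatabase"                          # placeholder
client = pyorient.OrientDB("localhost", 2424)   # binary protocol port
client.connect("root", "root_password")         # server-level credentials
if not client.db_exists(db_name, pyorient.STORAGE_TYPE_PLOCAL):
    client.db_create(db_name, pyorient.DB_TYPE_GRAPH, pyorient.STORAGE_TYPE_PLOCAL)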
Running the OrientDB server with sudo fixed the issue.
db_create() tries to modify the OSystem/dirty.fl file, and I got a permission-denied exception in the server logs:
FileNotFoundException: /home/ubuntu/orientdb-community-2.2.35/databases/OSystem/dirty.fl (Permission denied)
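A less privileged alternative to running the whole server under sudo, assuming the server is meant to run as the ubuntu user, is to fix ownership of the databases directory once:
sudo chown -R ubuntu:ubuntu /home/ubuntu/orientdb-community-2.2.35/databases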

Jenkins AccessDeniedException upon trying to enable security on Jenkins on an EC2

Last night, when I was trying to set up Jenkins from the jenkins.war file, I tried to enable security for it via username/password. I clicked the "Disable read access to anonymous" checkbox, and right after doing that I got an access-denied screen, even after logging in with the new credentials I had just created. I have tried the following (all of which still result in that screen):
removing anything on the EC2 instance that had to do with Jenkins (sudo find / -name "*jenkins*" followed by sudo rm [-rf] on anything that appeared in the results)
re-visiting that site after doing the above option
re-installing the WAR file
installing Jenkins as a service
attempting login again
Is there a way out of this?
I should have checked the processes and killed the one that was Jenkins. The process somehow outlived its JAR/WAR executable!
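In other words, something along these lines would have shown (and stopped) the stale process:
ps aux | grep -i jenkins   # find the leftover Jenkins java process
sudo pkill -f jenkins      # kill it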