When I use AWS CLI commands, debug data gets added to their output.
Is there a way to make them quiet?
Here is my use case:
# get deployed version
COMMAND="git describe --tags"
aws ecs execute-command --cluster="${CLUSTER}" --task="${TASK}" --container="${SERVICE}" --command="${COMMAND}" --interactive > VERSION
The issue is that instead of the expected contents of the VERSION file (just the version number):
0.0.67
I get something like this:
The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.
Starting session with SessionId: ecs-execute-command-123456789abcdefgh
0.0.67
Exiting session with sessionId: ecs-execute-command-123456789abcdefgh.
How can I get rid of the debug data?
I already tried adding a --quiet parameter (that parameter does not exist)
and redirecting error output; neither helped.
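For reference, the redirection attempt looked roughly like this (a sketch of what I ran; it changed nothing, which matches the fact that the plugin messages end up in VERSION and are therefore written to stdout, not stderr):

# same command as above with stderr discarded; the session messages still land in VERSION
aws ecs execute-command --cluster="${CLUSTER}" --task="${TASK}" --container="${SERVICE}" --command="${COMMAND}" --interactive 2>/dev/null > VERSION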
I'm trying to configure an AMI using a script that installs the unified CloudWatch agent on both AWS and on-premises Windows machines, using static IAM credentials for both of them. As part of the script, I set the credentials statically (as a test) using
$Env:AWS_ACCESS_KEY_ID="myaccesskey"
$Env:AWS_SECRET_ACCESS_KEY="mysecretkey"
$Env:AWS_DEFAULT_REGION="us-east-1"
Once I have the AMI, I create a machine and connect to it, and then verify the credentials are there by running aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************C6IF              env
secret_key     ****************SCnC              env
    region                us-east-1              env    ['AWS_REGION', 'AWS_DEFAULT_REGION']
But when I start the agent, I get the following error in the logs.
2022-12-26T17:51:49Z I! First time setting retention for log group test-cloudwatch-agent, update map to avoid setting twice
2022-12-26T17:51:49Z E! Failed to get credential from session: NoCredentialProviders: no valid providers in chain
caused by: EnvAccessKeyNotFound: failed to find credentials in the environment.
SharedCredsLoad: failed to load profile, .
EC2RoleRequestError: no EC2 instance role found
caused by: EC2MetadataError: failed to make EC2Metadata request
I'm using the Administrator user both for the installation of the agent and when RDPing into the machine. Is there anything I'm missing?
I've already tried adding the credentials to the .aws/credentials file and modifying the common-config.toml file to use a profile. That way it works, but in my case I just want to use the environment variables.
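For reference, the profile-based setup that did work looked roughly like this (a sketch; the profile name and file paths are just examples, not the exact values I used):

# %UserProfile%\.aws\credentials (INI format)
[AmazonCloudWatchAgent]
aws_access_key_id = myaccesskey
aws_secret_access_key = mysecretkey

# C:\ProgramData\Amazon\AmazonCloudWatchAgent\common-config.toml
[credentials]
  shared_credential_profile = "AmazonCloudWatchAgent"
  shared_credential_file = "C:\\Users\\Administrator\\.aws\\credentials"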
EDIT: I tested adding the credentials in the userdata script and modified a bit how they are created, and now it seems to work.
$env:aws_access_key_id = "myaccesskeyid"
$env:aws_secret_access_key = "mysecretaccesskey"
[System.Environment]::SetEnvironmentVariable('AWS_ACCESS_KEY_ID',$env:aws_access_key_id,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_SECRET_ACCESS_KEY',$env:aws_secret_access_key,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_DEFAULT_REGION','us-east-1',[System.EnvironmentVariableTarget]::Machine)
Now the problem is that I'm trying to start the agent at the end of the userdata script with the command from the documentation, but it does nothing (I see the command in the agent logs, but there is no error). If I RDP into the machine and launch the same command in PowerShell, it works fine. The command is:
& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m onPrem -s -c file:"C:\ProgramData\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent.json"
I finally was able to make it work, but I'm not sure why it didn't before. I was using
$env:aws_access_key_id = "accesskeyid"
$env:aws_secret_access_key = "secretkeyid"
[System.Environment]::SetEnvironmentVariable('AWS_ACCESS_KEY_ID',$env:aws_access_key_id,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_SECRET_ACCESS_KEY',$env:aws_secret_access_key,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_DEFAULT_REGION','us-east-1',[System.EnvironmentVariableTarget]::Machine)
to set the variables but then the agent was failing to initialize. I had to add
$env:aws_default_region = "us-east-1"
so it was able to run. I couldn't find the issue before because on Windows Server 2022 I don't get the logs from the execution. I had to try Windows Server 2019 to actually see the error when launching the agent.
I still don't know why the environment variables I set in the machine scope worked once logged into the machine but not when using them as part of the userdata script.
I am trying to authenticate to the gcloud SDK using gcloud init.
I get a URL I'm supposed to open in order to copy a token and return it to the CLI... but instead of a token, I get this error:
Authorization error
Error 400: invalid_request
Missing required parameter: redirect_uri
Is this a bug?
gcloud version info:
Google Cloud SDK 377.0.0
alpha 2022.03.10
beta 2022.03.10
bq 2.0.74
bundled-python3-unix 3.8.11
core 2022.03.10
gsutil 5.8
I am running gcloud init on WSL2 (Ubuntu 18.04). This error occurs right after installing gcloud with sudo apt install google-cloud-sdk.
I had the same problem; gcloud has slightly changed the way its auth flow works.
Run gcloud auth login and then copy the whole output (not just the URL) to a terminal on a computer that has both a web browser and the gcloud CLI installed. The command you should copy looks like
gcloud auth login --remote-bootstrap="https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=****.apps.googleusercontent.com&scope=openid+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth&state=****&access_type=offline&code_challenge=****&code_challenge_method=S256&token_usage=remote"
When you run that on your computer that has a web browser, it will open a browser window and prompt you to log in. Once you authorize your app in the web browser you get a new URL in your terminal that looks like
https://localhost:8085/?state=****&code=****&scope=email%20openid%20https://www.googleapis.com/auth/userinfo.email%20https://www.googleapis.com/auth/cloud-platform%20https://www.googleapis.com/auth/appengine.admin%20https://www.googleapis.com/auth/compute%20https://www.googleapis.com/auth/accounts.reauth&authuser=0&hd=****&prompt=consent
Paste this new URL back into the prompt on your headless machine after "Enter the output of the above command:" (in your case, this would be in your WSL2 terminal). Press enter and you get the output
You are now logged in as [****].
Your current project is [None]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
[8]+ Done code_challenge_method=S256
Try
gcloud init --console-only
Then you will get a URL that will work.
You must log in to continue. Would you like to log in (Y/n)? y
WARNING: The --[no-]launch-browser flags are deprecated and will be removed on June 7th 2022 (Release 389.0.0). Use --no-browser to replace --no-launch-browser.
Go to the following link in your browser:
https://accounts.google.com/o/o....
Update 2022-06-20: the console-only option was removed in version 389.0.0.
So instead use
gcloud init --no-browser
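For context, a rough sketch of the flow this kicks off (it is the same remote-bootstrap handoff described in the answer above):

# on the headless WSL2 machine
gcloud init --no-browser
# it prints a "gcloud auth login --remote-bootstrap=..." command; run that command on a machine
# that has both a web browser and the gcloud CLI, authorize in the browser, then paste the
# resulting https://localhost:8085/?... URL back into the prompt on the WSL2 machine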
There are some workarounds, and they depend on your particular Windows environment.
In this post and in this one you can check the issues most closely related to gcloud running in WSL.
Here you can find some Google Groups threads that might be helpful.
Finally, you could check some Windows troubleshooting resources that can help with WSL2-related issues in your own environment.
EDIT:
It seems this answer and the one from #K.I. give other commands that don't rely on implementation details. I've tested these 3 commands:
gcloud init --console-only
gcloud auth login --no-launch-browser
gcloud init --no-launch-browser
Original answer, another workaround (17/07/2022):
DISPLAY=":0" gcloud auth login
is a workaround mentioned in this issue. Instead of requiring you to install the gcloud CLI outside WSL2, it pretends there is a browser.
A link is printed; click it, log in in your browser, and you're authenticated with the CLI.
Then run gcloud init again.
You can do it without error by using another method of installing gcloud:
curl https://sdk.cloud.google.com | bash
exec -l $SHELL #restart shell
gcloud init
I am running DynamoDB locally using the instructions here. To rule out potential Docker networking issues, I am using the "Download Locally" version of the instructions. Before running DynamoDB locally, I run aws configure to set some fake values for the AWS access key, secret key, and region; here is the output:
$ aws configure
AWS Access Key ID [****************fake]:
AWS Secret Access Key [****************ake2]:
Default region name [local]:
Default output format [json]:
Here is the output of running DynamoDB locally:
$ java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: true
shouldDelayTransientStatuses: false
CorsParams: *
I can confirm that DynamoDB is running locally by listing tables with the AWS CLI:
$ aws dynamodb list-tables --endpoint-url http://localhost:8000
{
"TableNames": []
}
but when I visit http://localhost:8000/shell in my browser, I get an error and the page does not load.
I ran curl on the shell URL to see if I could get a more useful error message:
$ curl http://localhost:8000/shell
{
"__type":"com.amazonaws.dynamodb.v20120810#MissingAuthenticationToken",
"Message":"Request must contain either a valid (registered) AWS access key ID or X.509 certificate."}%
I tried looking up the error above, but there isn't much setup I can do when the shell runs entirely in the browser. Any help is appreciated on how I can run the DynamoDB JavaScript web shell with this setup.
Software versions:
aws cli: aws-cli/2.4.7 Python/3.9.9 Darwin/20.6.0 source/x86_64 prompt/off
OS: MacOS Big Sur 11.6.2 (20G314)
The DynamoDB Local web shell was deprecated with version 1.16.x and is no longer available from 1.17.x onward. There are no immediate plans for a new web shell to be introduced.
You can download an old version of DynamoDB Local (< 1.17.x) should you wish to use the web shell.
Available versions:
aws s3 ls s3://dynamodb-local-frankfurt/
Download the most recent working version with the web shell:
aws s3 cp s3://dynamodb-local-frankfurt/dynamodb_local_2021-04-27.tar.gz .
The next release of DynamoDB Local will have an updated README indicating its deprecation
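A rough sketch of pulling down that archive and running it (assuming the 2021-04-27 build is still published at this path and unpacks with the usual DynamoDBLocal.jar / DynamoDBLocal_lib layout):

aws s3 cp s3://dynamodb-local-frankfurt/dynamodb_local_2021-04-27.tar.gz .
mkdir dynamodb_local_2021-04-27 && tar xzf dynamodb_local_2021-04-27.tar.gz -C dynamodb_local_2021-04-27
cd dynamodb_local_2021-04-27
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
# http://localhost:8000/shell should then load in the browser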
As I answered in DynamoDB local http://localhost:8000/shell, this appears to be a regression in new versions of DynamoDB Local: the shell mysteriously stopped working, whereas it does work in versions from a year ago.
Somebody should report it to Amazon. If there is some flag that new versions require to enable the shell, it isn't documented anywhere that I can find.
Update Java to the latest version and voilà, it works!
I am trying to do Chef cookbook development via a Jenkinsfile pipeline. I have my Jenkins server running as a container (using the jenkinsci/blueocean image). As one of the stages, I am trying to do aws configure and then run kitchen test. With the code below I am getting an unauthorized operation error; for some reason, my AWS creds are not passed properly to .kitchen.yml. (No need to check the IAM creds, because they have admin access.)
stage('\u27A1 Verify Kitchen') {
steps {
sh '''mkdir -p ~/.aws/
echo 'AWS_ACCESS_KEY_ID=...' >> ~/.aws/credentials
echo 'AWS_SECRET_ACCESS_KEY=...' >> ~/.aws/credentials
cat ~/.aws/credentials
KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen list
KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen test'''
}
}
Is there any way I can pass AWS creds here? Also, .kitchen.yml no longer supports passing AWS creds inside the file. Is there some way I can pass creds on the command line, i.e. .kitchen.yml access_key=... secret_access_key=... /opt/chefdk/embedded/bin/kitchen test?
Really appreciate your help.
You don't need to set KITCHEN_LOCAL_YAML=.kitchen.yml; that's already the primary config file.
You probably want to be using a Jenkins credentials file rather than hardcoding things into the job. That said, the reason this isn't working is that the AWS credentials file is not a shell script, which is the syntax you're using there. It's an INI/TOML-style file, paired with a config file that shares a similar structure.
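For reference, a minimal sketch of what that credentials file actually expects (INI sections and key = value pairs, with the values elided as in the question):

mkdir -p ~/.aws/
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = ...
aws_secret_access_key = ...
EOF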
You should probably just be using the environment variable support in kitchen-ec2, via the withEnv pipeline helper method or similar mechanisms for integrating with Jenkins-managed credentials.
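As a minimal sketch of that route (kitchen-ec2 goes through the standard AWS SDK credential chain, so the usual environment variables apply; the values and region here are placeholders):

# export the standard AWS variables in the shell step, then run kitchen directly
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION=us-east-1
/opt/chefdk/embedded/bin/kitchen test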
I'm running emr-5.12.0 with Amazon 2.8.3, Hive 2.3.2, Hue 4.1.0, Livy 0.4.0, Spark 2.2.1, and Zeppelin 0.7.3, on one m4.large as my master node and one m4.large as a core node.
I am trying to execute a bootstrap action that configures some parts of the cluster. One of these includes the line:
sudo sed -i '/zeppelin.pyspark.python/c\ \"zepplin.pyspark.python\" : \"python3\",' /etc/alternatives/zeppelin-conf/interpreter.json
It makes sure that Zeppelin uses Python 3.4 instead of Python 2.7. It works fine if I execute this in the terminal after SSHing to the master node, but it fails when I submit it as a Custom JAR step in the AWS web interface. I get the following error:
sed: can't read /etc/alternatives/zeppelin-conf/interpreter.json: No such file or directory
Command exiting with ret '2'
The same thing happens if I use
sudo sed -i '/zeppelin.pyspark.python/c\ \"zepplin.pyspark.python\" : \"python3\",' /etc/zeppelin-conf/interpreter.json
Obviously I could just change it from the Zeppelin UI, but I would like to include it in the bootstrap action.
Thanks!
It turns out that a bootstrap action submitted through the AWS EMR web interface is submitted as a regular EMR step, so it's only run on the master node. This can be seen if you click 'AWS CLI export' in the cluster web interface. The intended bootstrap action is listed as a regular step.
Using the command line to launch a cluster with a bootstrap action bypasses this problem, so I've just used that.
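For reference, a minimal sketch of launching from the CLI with a bootstrap action (the bucket and script location are placeholders, not my actual setup; the instance settings mirror the cluster described above):

aws emr create-cluster \
  --release-label emr-5.12.0 \
  --applications Name=Hive Name=Spark Name=Zeppelin \
  --instance-type m4.large --instance-count 2 \
  --use-default-roles \
  --bootstrap-actions Path=s3://my-bucket/configure-zeppelin.sh,Name=configure-zeppelin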
Edit: Looking back at the web interface, it's pretty clear that I was adding regular steps instead of bootstrap actions. My bad!