I was testing remote logging of Airflow task logs to an S3 bucket path. Previously it was working fine: we could see logs in the Airflow UI when no environment variables were set for remote logging. But after setting logging.remote_logging = True, along with values for logging.remote_log_conn_id and logging.remote_base_log_folder, as environment variables inside the Airflow configuration options of AWS MWAA, we faced this error.
Error:
*** Log file does not exist: /usr/local/airflow/logs/DAG_test/test_mail/2022-11-22T06:42:54.483935+00:00/1.log
*** Fetching from: http://ip-10-192-11-229.us-west-2.compute.internal:8793/log/DAG_test/test_mail/2022-11-22T06:42:54.483935+00:00/1.log
*** Failed to fetch log file from worker. timed out
We have since reverted the environment variable logging.remote_logging back to False inside the Airflow configuration options of AWS MWAA,
but the issue persists.
Note: I have not removed or reverted the other two environment variables.
Any explanation or help is appreciated.
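For reference, the three Airflow configuration options in question would look something like this in the MWAA console (the connection ID and bucket path here are placeholders, not the real values):

```ini
logging.remote_logging = True
logging.remote_log_conn_id = aws_default
logging.remote_base_log_folder = s3://my-mwaa-logs-bucket/dag-logs
```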
To help anyone facing the same issue:
it was a permission issue on the S3 bucket for the connection provided via logging.remote_log_conn_id. After granting that connection's credentials access to the S3 bucket referenced by logging.remote_base_log_folder, logs were stored and became visible in the Airflow UI as well.
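As a sketch of the fix: the role behind that connection needs read/write access to the log bucket. All names below (bucket, role, policy) are placeholders, and the exact set of S3 actions may vary by Airflow version; a minimal policy might look like:

```shell
# Placeholder bucket name; use the bucket from logging.remote_base_log_folder.
BUCKET="my-mwaa-logs-bucket"

# Write a minimal S3 access policy for the log bucket.
cat > /tmp/mwaa-remote-logging-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject*", "s3:PutObject*"],
      "Resource": [
        "arn:aws:s3:::${BUCKET}",
        "arn:aws:s3:::${BUCKET}/*"
      ]
    }
  ]
}
EOF

# Attach it to the role used by the connection in logging.remote_log_conn_id
# (role and policy names here are hypothetical):
# aws iam put-role-policy --role-name my-mwaa-execution-role \
#   --policy-name mwaa-remote-logging \
#   --policy-document file:///tmp/mwaa-remote-logging-policy.json
```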
I am unable to call AWS services (Secrets Manager and SNS) from Fargate tasks.
I want these services to be invoked from inside the Docker image, which is hosted on ECR. When I run the pipeline, everything loads and runs correctly, except that when the script inside the Docker container is invoked, it throws an error. The script makes a call to either Secrets Manager or SNS. The error thrown is:
Unable to locate credentials. You can configure credentials by running "aws configure".
If I run aws configure, the error goes away and everything works smoothly. But I do not want to store the AWS credentials anywhere.
When I open the task definition I can see two roles: pipeline-task and ecsTaskExecutionRole.
Although I have given full administrator rights to both of these roles, the pipeline still throws the error. Is there any place I am missing where I can assign roles/policies? I want to completely avoid using aws configure.
If the script with the issue is not the PID 1 process (the one used to stop and start the container), it will not automatically read the task role (pipeline-task-role). From your description, this sounds like the case.
Add this to your Dockerfile:
RUN echo 'export $(strings /proc/1/environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)' >> /root/.profile
The AWS SDK from the script should know where to pick up the credentials from after that.
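To illustrate what that line does: /proc/1/environ is a NUL-separated list of PID 1's environment variables, and the trick is to copy the credentials URI into the shell's own environment. The sketch below simulates the file with a scratch path and a made-up URI, using tr (which splits on NULs the same way strings does for this purpose):

```shell
# Simulate /proc/1/environ: environment entries separated by NUL bytes.
# The URI value here is made up for illustration.
printf 'HOME=/root\0AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/example\0' \
  > /tmp/pid1-environ

# Split on NULs, keep the credentials variable, and export it,
# which is what the Dockerfile line does against the real /proc/1/environ.
export $(tr '\0' '\n' < /tmp/pid1-environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)

echo "$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
```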
I don't know if my problem was the same as yours, but I also experienced this kind of problem: I had set the task role, yet the container didn't get the right permissions. After spending a few days on it, I discovered that if you set any of the AWS environment variables
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
or AWS_DEFAULT_REGION
into the task definition, the credential provider chain stops at the static-credentials step, so the SDK your script is using looks for credentials in the ~/.aws/credentials file and, since it can't find them there, throws Unable to locate credentials.
If you want to know more about the credential provider chain, you can read about it at https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html
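A quick way to check for this is sketched below: warn if any of those static-credential variables are present in the container environment (the function name is made up):

```shell
# Warn about environment variables that make the SDK stop at the
# static-credentials step instead of falling through to the task role.
check_static_creds() {
  for var in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION; do
    if [ -n "$(printenv "$var")" ]; then
      echo "WARNING: $var is set; remove it from the task definition"
    fi
  done
}

check_static_creds
```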
I tried to deploy a Cloud Function on Google Cloud Platform using the console. The command I used was:
gcloud functions deploy function_name --runtime=python37 --memory=1024MB --region=asia-northeast1 --allow-unauthenticated --trigger-http
But I am getting this error,
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Build failed: could not resolve storage source: googleapi: Error 404: Not Found, notFound
I tried googling around, but it seems no one has faced this error message before. I have also tried switching to another project, and there the deployment works fine:
gcloud config set project another_project
I'd appreciate it if anyone has any idea what is causing this error and how I can solve it. Thanks!
As per the documentation here:
https://cloud.google.com/functions/docs/building
it says that, because Cloud Storage is used directly in your project, the source code directory for your functions is visible in a bucket named:
gcf-sources-<PROJECT_NUMBER>-<REGION>
Therefore, if you have deleted that bucket from Cloud Storage, you need to recreate it.
For example, if your project number is 123456789 and it runs in asia-south1, then the bucket name should be:
gcf-sources-123456789-asia-south1
Once you recreate the bucket, you can use the gcloud or Firebase CLI to deploy, and it should work normally.
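As a sketch, using the example numbers above (the actual bucket-creation command is commented out, since it needs access to your project):

```shell
# Build the expected source bucket name from project number and region
# (example values from above).
PROJECT_NUMBER=123456789
REGION=asia-south1
BUCKET="gcf-sources-${PROJECT_NUMBER}-${REGION}"
echo "$BUCKET"

# Recreate it in the same region (PROJECT_ID is your project ID, not number):
# gsutil mb -p PROJECT_ID -l "$REGION" "gs://${BUCKET}"
```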
Hope this helps. It worked for me!
Enjoy!
Please check if a bucket named gcf-sources-**** is available.
If not, you will need to contact Google Cloud support to request that the bucket be rebuilt.
Update:
https://issuetracker.google.com/175866925
Update from GCP Support: that did not resolve my problem at all.
First they said they needed to recreate that bucket; then they said that did not resolve the problem and they are still investigating.
Just for testing, I created that bucket myself as Oru suggested.
Still the same error. I will update this thread when I have new information.
I have entered AWS credentials in Jenkins at /credentials; however, they do not show up in the drop-down list for the post-build steps in the AWS Elastic Beanstalk plugin.
If I click Validate Credentials, I get this strange error:
Failure
com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), com.amazonaws.auth.profile.ProfileCredentialsProvider@5c932b96: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@32abba7: The requested metadata is not found at http://169.254.169.254/latest/meta-data/iam/security-credentials/]
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:136)
I don't know where it got that IP address (169.254.169.254 is the EC2 instance metadata endpoint). When I search for that IP in the Jenkins directory, I find:
-bash-4.2$ grep -r 169.254.169.254 *
plugins/ec2/AMI-Scripts/ubuntu-init.py:conn = httplib.HTTPConnection("169.254.169.254")
The contents of that file is here: https://pastebin.com/3ShanSSw
There are actually two different Amazon Elastic Beanstalk plugins:
AWSEB Deployment Plugin, v0.3.19, by Aldrin Leal
AWS Beanstalk Publisher Plugin, v1.7.4, by David Tanner
Neither of them works; neither displays the credentials in the drop-down list. Since updating Jenkins, I am unable even to see "Deploy to Elastic Beanstalk" as a post-build step for the first one (v0.3.19), even though it is the only one installed.
For the second plugin (v1.7.4), there is a configuration screen (screenshot not reproduced here).
When I fill in what I can and run it, it gives the error:
No credentials provided for build!!!
Environment found (environment id='e-yfwqnurxh6', name='appenvironment'). Attempting to update environment to version label 'sprint5-13'
'appenvironment': Attempt 0/5
'appenvironment': Problem:
com.amazonaws.services.elasticbeanstalk.model.AWSElasticBeanstalkException: No Application Version named 'sprint5-13' found. (Service: AWSElasticBeanstalk; Status Code: 400; Error Code: InvalidParameterValue; Request ID: af9eae4f-ad56-426e-8fe4-4ae75548f3b1)
I tried to add an S3 sub-task to the Elastic Beanstalk deployment, but it failed with an exception.
No credentials provided for build!!!
Root File Object is a file. We assume its a zip file, which is okay.
Uploading file awseb-4831053374102655095.zip as s3://appname-sprint5-15.zip
ERROR: Build step failed with exception
com.amazonaws.services.s3.model.AmazonS3Exception: The XML you provided was not well-formed or did not validate against our published schema (Service: Amazon S3; Status Code: 400; Error Code: MalformedXML; Request ID: 7C4734153DB2BC36; S3 Extended Request ID: x7B5HflSeiIw++NGosos08zO5DxP3WIzrUPkZOjjbBv856os69QRBVgic62nW3GpMtBj1IxW7tc=), S3 Extended Request ID: x7B5HflSeiIw++NGosos08zO5DxP3WIzrUPkZOjjbBv856os69QRBVgic62nW3GpMtBj1IxW7tc=
This Jenkins is hopelessly out of date and unmaintained. I added the Post Build Task plugin, installed the eb tool as the jenkins user, ran eb init in the job directory, and edited .elasticbeanstalk/config.yml to add the lines:
deploy:
  artifact: target/AppName-Sprint5-SNAPSHOT-bin.zip
Then I entered the shell command to deploy the build:
/var/lib/jenkins/.local/bin/eb deploy -l sprint5-${BUILD_NUMBER}
For the Elastic Beanstalk plugin, the right place to configure the AWS key is the Jenkins master configuration page:
http://{jenkinsURL}/configure
I'm trying to enable SSH for my AWS Elastic Beanstalk application and have run eb ssh --setup (as a user with what seem to be suitable privileges, ElasticBeanstalkFullAccess, using AWS CLI 3.x), but my attempt fails with the following (IDs changed to protect the innocent):
INFO: Environment update is starting.
INFO: Updating environment sitetest-develop-env's configuration settings.
INFO: Created Auto Scaling launch configuration named: awseb-e-notrea1nUm-stack-AWSEBAutoScalingLaunchConfiguration-MAdUpa2bCrCx
ERROR: Updating Auto Scaling group failed Reason: Template error: DBInstance bxzumnil42x11w doesn't exist
ERROR: Service:AmazonCloudFormation, Message:Stack named 'awseb-e-notrea1nUm-stack' aborted operation. Current state: 'UPDATE_ROLLBACK_IN_PROGRESS' Reason: The following resource(s) failed to update: [AWSEBAutoScalingGroup].
ERROR: Failed to deploy configuration.
INFO: Created Auto Scaling launch configuration named: awseb-e-myjrm7xr9n-stack-AWSEBAutoScalingLaunchConfiguration-5uKixPQCM71K
INFO: Deleted Auto Scaling launch configuration named: awseb-e-notrea1nUm-stack-AWSEBAutoScalingLaunchConfiguration-MAdUpa2bCrCx
INFO: The environment was reverted to the previous configuration setting.
What is causing this to happen? Is there something I need to do in the AWS Console to prevent this error?
The relevant error message here is DBInstance bxzumnil42x11w doesn't exist.
You have probably opted to let Elastic Beanstalk create an RDS server as part of the environment creation process, and now it seems the database is no longer there. Did you delete it manually?
In any case, I would recommend NOT letting EB manage your RDS. It's best practice to create one yourself and manually assign the following environment variables: RDS_HOSTNAME, RDS_PORT, RDS_DB_NAME, RDS_USERNAME, RDS_PASSWORD.
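For illustration, the application would then read those five variables at runtime; all values below are placeholders. With Elastic Beanstalk they would normally be set via the console (Configuration > Software > Environment properties) or eb setenv rather than exported by hand:

```shell
# Placeholder values; point these at the RDS instance you created yourself.
export RDS_HOSTNAME=mydb.abc123.us-west-2.rds.amazonaws.com
export RDS_PORT=3306
export RDS_DB_NAME=appdb
export RDS_USERNAME=appuser
export RDS_PASSWORD=change-me

# The application can then assemble its connection string from them:
echo "mysql://${RDS_USERNAME}@${RDS_HOSTNAME}:${RDS_PORT}/${RDS_DB_NAME}"
```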
At this point I would recommend terminating this environment and creating a new one; this time, don't check the checkbox named Create an RDS DB Instance with this environment.
I am attempting to set up AWS with an Elastic Beanstalk instance I previously created. I have entered the various details into the config file; however, when I try to use aws.push I get the error message:
Updating the AWS Elastic Beanstalk environment x-xxxxxx...
Error: Failed to get the Amazon S3 bucket name
I have checked the credentials in IAM, and I have full administrator privileges. When I run eb status (which should show Green), I get the following message:
InvalidClientTokenId. The security token included in the request is invalid.
Run aws configure again and re-enter your credentials.
It's likely you're using old (disabled or deleted) access/secret keys, or you accidentally swapped the access key with the secret key.
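One quick sanity check, sketched below with AWS's documented example key: access key IDs are 20 uppercase alphanumeric characters starting with AKIA (or ASIA for temporary keys), while secret keys are 40 characters, so a simple format check catches a swapped pair:

```shell
# AWS's published example access key ID, used here as a stand-in.
ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"

if printf '%s' "$ACCESS_KEY_ID" | grep -Eq '^(AKIA|ASIA)[A-Z0-9]{16}$'; then
  echo "format looks like an access key ID"
else
  echo "does not look like an access key ID; the keys may be swapped"
fi

# To confirm the credentials are actually live (not disabled or deleted):
# aws sts get-caller-identity
```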
For me it was that my system clock was off by more than 5 minutes. Updating the time fixed the issue.
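To check for this, compare local UTC time against the Date header an AWS endpoint returns (the network commands are commented out here since they need connectivity):

```shell
# Print local UTC time; AWS rejects signed requests whose timestamp
# is more than about 5 minutes off the server's clock.
date -u

# Compare with an AWS endpoint's clock, then resync if they differ:
# curl -sI https://sts.amazonaws.com | grep -i '^date:'
# sudo ntpdate pool.ntp.org   # or: sudo chronyc makestep
```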