FATAL: Received: 403 Forbidden when creating cache on DigitalOcean with a private GitLab runner

I am running tests on DigitalOcean servers using GitLab Runner. I want to cache gems so they aren't installed from scratch on every build. The cache section in my runner's config.toml looks like this:
[runners.cache]
  Type = "s3"
  ServerAddress = "ams3.digitaloceanspaces.com"
  AccessKey = "KEY"
  SecretKey = "SECRET"
  BucketName = "cache-for-builds"
  Insecure = true
When the build finishes and the runner tries to create the cache, it fails with FATAL: Received: 403 Forbidden.
I tried regenerating the DigitalOcean Spaces key and secret, but it didn't help.
I also don't have any certs installed on my private GitLab runner bastion server.
The cache Space itself shows up fine in the DigitalOcean UI.
What am I doing wrong?
How can I fix the Forbidden error?
How can I debug this error?

It turns out I was using an outdated configuration format for the cache section; for GitLab Runner 11.5.1, the correct format is:
# /etc/gitlab-runner/config.toml
[[runners]]
  ...
  [runners.cache]
    Type = "s3"
    Path = "cache_for_builds"
    [runners.cache.s3]
      ServerAddress = "ams3.digitaloceanspaces.com"
      AccessKey = "<key>"
      SecretKey = "<secret>"
      BucketName = "cache-for-builds"
      BucketLocation = "ams3"
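With the runner side fixed, the job itself still has to declare what to cache. A minimal sketch of the .gitlab-ci.yml side for the gems case (job name and paths are placeholders, not from the original question):
# .gitlab-ci.yml
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - vendor/ruby
test:
  script:
    - bundle install --path vendor/ruby
    - bundle exec rspec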

Related

CloudWatch agent not using environment variable credentials on Windows

I'm trying to configure an AMI using a script that installs the unified CloudWatch agent on both AWS and on-premises Windows machines, using static IAM credentials for both of them. As part of the script, I set the credentials statically (as a test) using:
$Env:AWS_ACCESS_KEY_ID="myaccesskey"
$Env:AWS_SECRET_ACCESS_KEY="mysecretkey"
$Env:AWS_DEFAULT_REGION="us-east-1"
Once I have the AMI, I create a machine, connect to it, and then verify the credentials are there by running aws configure list:
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************C6IF              env
secret_key     ****************SCnC              env
    region                us-east-1              env    ['AWS_REGION', 'AWS_DEFAULT_REGION']
But when I start the agent, I get the following error in the logs.
2022-12-26T17:51:49Z I! First time setting retention for log group test-cloudwatch-agent, update map to avoid setting twice
2022-12-26T17:51:49Z E! Failed to get credential from session: NoCredentialProviders: no valid providers in chain
caused by: EnvAccessKeyNotFound: failed to find credentials in the environment.
SharedCredsLoad: failed to load profile, .
EC2RoleRequestError: no EC2 instance role found
caused by: EC2MetadataError: failed to make EC2Metadata request
I'm using the Administrator user both for the installation of the agent and when RDPing into the machine. Is there anything I'm missing?
I've already tried adding the credentials to the .aws/credentials file and modifying the common-config.toml file to use a profile. That way it works, but in my case I just want to use the environment variables.
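For reference, the profile pointer I added in common-config.toml looked roughly like this (the profile name is a placeholder for whatever is in .aws/credentials):
# C:\ProgramData\Amazon\AmazonCloudWatchAgent\common-config.toml
[credentials]
  shared_credential_profile = "AmazonCloudWatchAgent"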
EDIT: I tested adding the credentials in the userdata script and modified a bit how they are created, and now it seems to work.
$env:aws_access_key_id = "myaccesskeyid"
$env:aws_secret_access_key = "mysecretaccesskey"
[System.Environment]::SetEnvironmentVariable('AWS_ACCESS_KEY_ID',$env:aws_access_key_id,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_SECRET_ACCESS_KEY',$env:aws_secret_access_key,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_DEFAULT_REGION','us-east-1',[System.EnvironmentVariableTarget]::Machine)
Now the problem is that I'm trying to start the agent at the end of the userdata script with the command from the documentation, but it does nothing (I see the command in the agent logs but there is no error). If I RDP into the machine and launch the same command in PowerShell, it works fine. The command is:
& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m onPrem -s -c file:"C:\ProgramData\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent.json"
I finally was able to make it work, but I'm not sure why it didn't before. I was using:
$env:aws_access_key_id = "accesskeyid"
$env:aws_secret_access_key = "secretkeyid"
[System.Environment]::SetEnvironmentVariable('AWS_ACCESS_KEY_ID',$env:aws_access_key_id,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_SECRET_ACCESS_KEY',$env:aws_secret_access_key,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_DEFAULT_REGION','us-east-1',[System.EnvironmentVariableTarget]::Machine)
to set the variables, but then the agent was failing to initialize. I had to add
$env:aws_default_region = "us-east-1"
so it was able to run. I couldn't find the issue before because on Windows Server 2022 I don't get the logs from the execution; I had to try Windows Server 2019 to actually see the error when launching the agent.
I still don't know why the environment variables I set in the machine scope worked once logged into the machine but not when used as part of the userdata script. Presumably machine-scope changes are not visible to processes that are already running, so the agent launched from the still-running userdata process couldn't see AWS_DEFAULT_REGION until it was also set in the process ($env:) scope.
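Putting the pieces together, a sketch of the userdata excerpt that ended up working, setting each variable in both the current process and the machine scope before starting the agent:
# Hypothetical userdata excerpt; keys and region are placeholders
$env:AWS_ACCESS_KEY_ID = "myaccesskeyid"
$env:AWS_SECRET_ACCESS_KEY = "mysecretaccesskey"
$env:AWS_DEFAULT_REGION = "us-east-1"
[System.Environment]::SetEnvironmentVariable('AWS_ACCESS_KEY_ID', $env:AWS_ACCESS_KEY_ID, [System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_SECRET_ACCESS_KEY', $env:AWS_SECRET_ACCESS_KEY, [System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_DEFAULT_REGION', $env:AWS_DEFAULT_REGION, [System.EnvironmentVariableTarget]::Machine)
# Start the agent from the same process, which now has all three variables set
& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m onPrem -s -c file:"C:\ProgramData\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent.json"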

AWS ElasticBeanstalk Terraform DisableIMDSv1 Unknown Configuration Setting

I'm trying to disable IMDSv1 in an Elastic Beanstalk module I'm writing. I'm looking at the available EB autoscaling setting options here. It shows that DisableIMDSv1 is a valid setting, but when I run terraform apply I get this error:
ConfigurationValidationException: Configuration validation exception: Invalid option specification (Namespace: 'aws:autoscaling:launchconfiguration', OptionName: 'DisableIMDSv1'): Unknown configuration setting.
status code: 400
I'm using a variable to loop through my settings, so this is what the variable entries with DisableIMDSv1 look like:
launch_configuration = {
  namespace = "aws:autoscaling:launchconfiguration"
  name      = "IamInstanceProfile"
  value     = "some-role"
}
disable_imds_v1 = {
  namespace = "aws:autoscaling:launchconfiguration"
  name      = "DisableIMDSv1"
  value     = "true"
}
If I comment out the disable_imds_v1 part I can successfully run my terraform apply.
It looks like the DisableIMDSv1 option might be a recent addition to the available Beanstalk setting options (added June 2020).
Is this a Terraform issue where they don't have the option available, or do I need to upgrade to Terraform 0.13.x? I'm using Terraform 0.12.23 with AWS provider 3.2.0. I ran terraform init -upgrade, which bumped my AWS provider from 3.1.0 to 3.2.0, thinking that might fix it, but I'm still seeing the Unknown configuration setting message.
I had the same issue for EB environments based on Amazon Linux 1 (AL1). I think the option is not supported for AL1. But it worked for me in AL2.
Below is an example that I use. I also use setting as the name of the settings blocks, rather than launch_configuration and disable_imds_v1 as in your case.
For example, I used aws_elastic_beanstalk_environment:
resource "aws_elastic_beanstalk_environment" "ebenv" {
# ...
# DisableIMDSv1 option will NOT work in AL1
#solution_stack_name = "64bit Amazon Linux 2018.03 v2.9.9 running PHP 7.2"
# but it will work with AL2
solution_stack_name = "64bit Amazon Linux 2 v3.1.0 running PHP 7.4"
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "EC2KeyName"
value = aws_key_pair.key.key_name
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "DisableIMDSv1"
value = "true"
}
}
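If you want to keep driving the settings from a variable as in the question, a sketch using a dynamic block on an AL2 stack (the variable name var.settings is a placeholder) could look like:
resource "aws_elastic_beanstalk_environment" "ebenv" {
  # ...
  dynamic "setting" {
    # e.g. var.settings = { launch_configuration = {...}, disable_imds_v1 = {...} }
    for_each = var.settings
    content {
      namespace = setting.value.namespace
      name      = setting.value.name
      value     = setting.value.value
    }
  }
}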

Serving media files on S3 for Saleor

I have used Saleor, a Django store, and hosted it on Google Cloud; it is working fine. Now what I want to do is host media files on an S3 bucket. I have created a bucket and tried some tutorials, but with no success. I could not find any complete step-by-step guide for this. If anyone can help me with this problem it would be appreciated.
AWS_ACCESS_KEY_ID = os.environ.get('accessid')
AWS_SECRET_ACCESS_KEY = os.environ.get('accesskey')
AWS_STORAGE_BUCKET_NAME = os.environ.get('testbucket')
I followed this guide for Saleor S3 integration: https://saleor.readthedocs.io/en/latest/deployment/s3.html
Now here is the situation: I have created the bucket and have AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_STORAGE_BUCKET_NAME.
Can someone guide me from here on how to serve media files on S3 for Saleor?
The Saleor documentation is good but a little sparse. I had more luck following these instructions (read the installation section and then the S3 one -- this is the library Saleor is using anyway):
https://django-storages.readthedocs.io/en/latest/
I would start by configuring your locally-running app to use S3 for its static files and running python manage.py collectstatic, for a tighter debugging loop.
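A minimal sketch of the django-storages settings this boils down to (environment variable names are placeholders; Saleor's own settings module wires these up slightly differently):
# settings.py -- minimal django-storages S3 sketch
import os

AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME')

# Route uploaded media (and optionally static files) through the S3 backend
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'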
After creating the bucket, you have to update your media/static URLs and also add your AWS keys (I recommend adding them through environment variables):
AWS_ACCESS_KEY_ID = os.environ.get('accessid')
AWS_SECRET_ACCESS_KEY = os.environ.get('accesskey')
AWS_STORAGE_BUCKET_NAME = os.environ.get('testbucket')
Once that's done, you need to set up the configuration of your bucket, and that's it.
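For example, if media requests from the storefront end up blocked by the browser, the bucket usually needs a CORS configuration; a permissive sketch in the S3 JSON format (tighten AllowedOrigins for production):
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["*"],
    "MaxAgeSeconds": 3000
  }
]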

Using a Gradle plugin to push Docker images to ECR

I am using gradle-docker-plugin to build and push Docker images to Amazon's ECR. To do this I am also using a remote Docker daemon running on an EC2 instance. I have configured a custom task, EcrLoginTask, to fetch the ECR authorization token using the aws-java-sdk-ecr library. The relevant code looks like:
import com.amazonaws.auth.AWSStaticCredentialsProvider
import com.amazonaws.auth.BasicAWSCredentials
import com.amazonaws.regions.Regions
import com.amazonaws.services.ecr.AmazonECR
import com.amazonaws.services.ecr.AmazonECRClient
import com.amazonaws.services.ecr.model.GetAuthorizationTokenRequest
import com.amazonaws.services.ecr.model.GetAuthorizationTokenResult
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction

class EcrLoginTask extends DefaultTask {
    String accessKey
    String secretCode
    String region
    String registryId

    @TaskAction
    String getPassword() {
        AmazonECR ecrClient = AmazonECRClient.builder()
                .withRegion(Regions.fromName(region))
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretCode)))
                .build()
        GetAuthorizationTokenResult authorizationToken = ecrClient.getAuthorizationToken(
                new GetAuthorizationTokenRequest().withRegistryIds(registryId))
        String token = authorizationToken.getAuthorizationData().get(0).getAuthorizationToken()
        System.setProperty("DOCKER_PASS", token) // Will this work ?
        return token
    }
}
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.amazonaws:aws-java-sdk-ecr:1.11.244'
        classpath 'com.bmuschko:gradle-docker-plugin:3.2.1'
    }
}
docker {
    url = "tcp://remote-docker-host:2375"
    registryCredentials {
        username = 'AWS'
        password = System.getProperty("DOCKER_PASS") // Need to provide at runtime !!!
        url = 'https://123456789123.dkr.ecr.eu-west-1.amazonaws.com'
    }
}
task getECRPassword(type: EcrLoginTask) {
    accessKey AWS_KEY
    secretCode AWS_SECRET
    region AWS_REGION
    registryId '123456789123'
}
task dbuild(type: DockerBuildImage) {
    dependsOn build
    inputDir = file(".")
    tag "123456789123.dkr.ecr.eu-west-1.amazonaws.com/n6duplicator"
}
task dpush(type: DockerPushImage) {
    dependsOn dbuild, getECRPassword
    imageName "123456789123.dkr.ecr.eu-west-1.amazonaws.com/n6duplicator"
}
The remote Docker connection works fine, the ECR token is fetched successfully, and the dbuild task also executes successfully.
PROBLEM
The dpush task fails with "Could not push image: no basic auth credentials".
I believe this is because the authorization token received by the EcrLoginTask was not passed on to the password property in the docker configuration closure.
How do I fix it? I need to provide the credentials on the fly each time the build is executed.
Have a look at the gradle-aws-ecr-plugin. It's able to get a fresh (latest) Amazon ECR Docker registry token during every AWS/Docker command call:
All Docker tasks such as DockerPullImage, DockerPushImage, etc. that are configured with the ECR registry URL will get a temporary ECR token. No further configuration is necessary. It is possible to set the registry URL for individual tasks.
This should work well alongside either the gradle-docker-plugin or Netflix's nebula-docker-plugin, which is also based on, and extends, the 'bmuschko' docker plugin.
The gradle-aws-ecr-plugin BitBucket homepage explains concisely how to configure both the AWS credentials and the ECR registry URL.
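If you would rather stay on plain gradle-docker-plugin, the root cause is that System.getProperty("DOCKER_PASS") in the docker { } block is evaluated at configuration time, before getECRPassword has run. A sketch of one workaround (untested; it assumes the extension's credentials are only read when the push task executes) is to set the password in a doFirst, decoding the token first, since ECR returns it as base64 of "AWS:<password>":
task dpush(type: DockerPushImage) {
    dependsOn dbuild, getECRPassword
    imageName "123456789123.dkr.ecr.eu-west-1.amazonaws.com/n6duplicator"
    doFirst {
        // EcrLoginTask has run by now; decode the base64 "AWS:<password>"
        // token and keep only the part after the colon
        def decoded = new String(System.getProperty("DOCKER_PASS").decodeBase64())
        docker.registryCredentials.password = decoded.split(':', 2)[1]
    }
}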

How to create an AWS Elastic Beanstalk environment using the Java SDK?

Can anyone help me with, or provide any sources for, creating an AWS Elastic Beanstalk environment using a Java program and deploying our application into it?
Thank you in advance.
You can download the AWS Java SDK here. It is also in the Maven repository:
Maven:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.9.7</version>
</dependency>
Gradle:
'com.amazonaws:aws-java-sdk:1.9.7'
Now, on to using the SDK. You might want to read up on getting started with the AWS SDK.
Here is some very watered down code to get you started:
import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient;
import com.amazonaws.services.elasticbeanstalk.model.*;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.File;

public class AwsTest {
    public static void main(String[] args) {
        AWSElasticBeanstalkClient eb = new AWSElasticBeanstalkClient();

        // Create Application
        CreateApplicationRequest request = new CreateApplicationRequest("myAppName");
        eb.createApplication(request);

        // Create Environment
        CreateEnvironmentRequest envRequest = new CreateEnvironmentRequest("myAppName", "env-name");
        envRequest.setSolutionStackName("64bit Amazon Linux 2014.09 v1.0.9 running Tomcat 7 Java 7");
        envRequest.setVersionLabel("application Version");
        eb.createEnvironment(envRequest);

        // Deploy code: upload the bundle to the Beanstalk storage bucket
        CreateStorageLocationResult location = eb.createStorageLocation();
        String bucket = location.getS3Bucket();
        File file = new File("myapp.zip");
        PutObjectRequest object = new PutObjectRequest(bucket, "myapp.zip", file);
        new AmazonS3Client().putObject(object);

        // Describe the uploaded bundle as an application version
        CreateApplicationVersionRequest versionRequest = new CreateApplicationVersionRequest();
        versionRequest.setVersionLabel("myversion");
        versionRequest.setApplicationName("myAppName");
        S3Location s3 = new S3Location(bucket, "myapp.zip");
        versionRequest.setSourceBundle(s3);

        // Point the environment at the new version
        UpdateEnvironmentRequest updateRequest = new UpdateEnvironmentRequest();
        updateRequest.setEnvironmentName("env-name");
        updateRequest.setVersionLabel("myversion");
        eb.updateEnvironment(updateRequest);
    }
}
There is a small piece of code missing in the code given above, under this section:
CreateApplicationVersionRequest versionRequest = new CreateApplicationVersionRequest();
versionRequest.setVersionLabel("myversion");
versionRequest.setApplicationName("myAppName");
S3Location s3 = new S3Location(bucket, "myapp.zip");
versionRequest.setSourceBundle(s3);
You need to add eb.createApplicationVersion(versionRequest); in order to create a new version from your own source files. Only then can you deploy the new version to the running instance of the environment.
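Putting it together, the tail of the deploy step would look roughly like this (a sketch based on the code above):
// register the uploaded bundle as a new application version
eb.createApplicationVersion(versionRequest);

// then point the running environment at it
UpdateEnvironmentRequest updateRequest = new UpdateEnvironmentRequest();
updateRequest.setEnvironmentName("env-name");
updateRequest.setVersionLabel("myversion");
eb.updateEnvironment(updateRequest);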
A convenient method for deploying an AWS Elastic Beanstalk environment is to use the AWS Toolkit for Eclipse.
It allows you to write and test your code locally, then create an Elastic Beanstalk environment and deploy your code to the environment.
The Elastic Beanstalk management console can also be used to deploy a Java environment with a Sample Application, which you can then override with your own code.
See also:
Deploying an Application Using AWS Elastic Beanstalk
AWS Elastic Beanstalk Documentation
Here is updated code using the AWS SDK for Java V2 to create an environment for Elastic Beanstalk:
Region region = Region.US_WEST_2;
ElasticBeanstalkClient beanstalkClient = ElasticBeanstalkClient.builder()
        .region(region)
        .build();

ConfigurationOptionSetting setting1 = ConfigurationOptionSetting.builder()
        .namespace("aws:autoscaling:launchconfiguration")
        .optionName("IamInstanceProfile")
        .value("aws-elasticbeanstalk-ec2-role")
        .build();

CreateEnvironmentRequest applicationRequest = CreateEnvironmentRequest.builder()
        .description("An AWS Elastic Beanstalk environment created using the AWS Java API")
        .environmentName("MyEnviron8")
        .solutionStackName("64bit Amazon Linux 2 v3.2.12 running Corretto 11")
        .applicationName("TestApp")
        .cnamePrefix("CNAMEPrefix")
        .optionSettings(setting1)
        .build();

CreateEnvironmentResponse response = beanstalkClient.createEnvironment(applicationRequest);
To learn how to get up and running with the AWS SDK for Java V2, see https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html.