Supply token in .npmrc during build

I am using AWS CodeArtifact within my project as a private NPM registry (and proxy, of course) and I have some issues getting the perfect workflow. Right now I have a .sh script which generates the auth token for AWS and writes a project-local .npmrc file. It looks pretty much like this:
#!/bin/sh
export CODEARTIFACT_AUTH_TOKEN=`aws codeartifact get-authorization-token --domain xxxxx \
--domain-owner XXXXXX --query authorizationToken --output text --profile XXXXX`
export REPOSITORY_ENDPOINT=`aws codeartifact get-repository-endpoint --domain xxxxx \
--repository xxxx --format npm --query repositoryEndpoint --output text --profile xxxx`
cat << EOF > .npmrc
registry=$REPOSITORY_ENDPOINT
${REPOSITORY_ENDPOINT#https:}:always-auth=true
${REPOSITORY_ENDPOINT#https:}:_authToken=\${CODEARTIFACT_AUTH_TOKEN}
EOF
Now, I don't want to run this script manually, of course; it should be part of my NPM build process, so I started with things like this in package.json:
"scripts": {
"build": "tsc",
"prepublish": "./scriptabove.sh"
}
When running "npm publish" (for example) the .npmrc is created nicely but i assume since NPM is already running, any changes to npmrc wont get picked up. When i run "npm publish" the second time, it works of course.
My question: Is there any way to hook into the build process to apply the token? I dont want to say to my users "please call the scriptabove.sh first before doing any NPM commands. And i dont like "scriptabove.sh && npm publish" either.

You could create a script like this (the publish-package command can be named whatever you want):
"scripts": {
"build": "tsc",
"prepublish": "./scriptabove.sh",
"publish-package": "npm run prepublish && npm publish"
}
Explanation:
Use & (single ampersand) for parallel execution.
Use && (double ampersand) for sequential execution.
publish-package will then run the prepublish command first, and only after that run npm publish. This method is a great way to chain npm commands that need to run in sequential order.
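For example (the script names here are just illustrative):
# & starts both scripts at the same time (parallel)
npm run build & npm run lint
# && runs the second script only if the first one succeeds (sequential)
npm run build && npm run lint
With the scripts above, your users then only ever have to run npm run publish-package.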
For more information, here's a Stack Overflow post about it:
Running NPM scripts sequentially

Related

How to run lambda function on a schedule in my localhost?

I have a task that needs to be scheduled as an AWS Lambda function. I wrote a SAM template as below, and I see it works when deployed to the AWS environment (my function gets triggered at the configured interval).
But we want to test in a dev environment first, before deploying. I use sam local start-api [OPTIONS] to run our functions in the dev environment. The problem is that every function configured as a REST API works, but the scheduled task does not. I'm not sure whether this is possible in a local/dev environment or not. If not, please suggest a solution (is it possible?). Thank you.
This is the template:
aRestApi:
  ...
  ...
sendMonthlyReport:
  Type: AWS::Serverless::Function
  Properties:
    Handler: src.monthlyReport
    Runtime: nodejs16.x
    Events:
      ScheduledEvent:
        Type: Schedule
        Properties:
          Schedule: "cron(* * * * *)"
If you search for local testing of Lambda functions before deployment, you will probably be fed this resource by the quote-unquote "google": https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-debugging.html
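For the scheduled function specifically, something along these lines should let you exercise the handler locally with the SAM CLI (the event file name is arbitrary, and the exact generate-event service/event names may vary with your SAM CLI version):
# generate a sample CloudWatch scheduled event and invoke the function with it
sam local generate-event cloudwatch scheduled-event > scheduled-event.json
sam local invoke sendMonthlyReport --event scheduled-event.json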
However, there are other ways to do it.
I personally use the public docker image from aws.
Here is an example of the docker image created for a specific use case of mine.
FROM public.ecr.aws/lambda/python:3.8
RUN yum -y install tar gzip zlib freetype-devel \
gcc \
ghostscript \
lcms2-devel \
libffi-devel \
libimagequant-devel \
# ...and some more dependencies...
&& yum clean all
COPY requirements.txt ./
RUN python3.8 -m pip install -r requirements.txt
# Replace Pillow with Pillow-SIMD to take advantage of AVX2
RUN pip uninstall -y pillow && CC="cc -mavx2" pip install -U --force-reinstall pillow-simd
COPY <handler>.py ./<handler>.py
# Set the CMD to your handler
ENTRYPOINT [ "python3.8","app5.py" ]
In your case, follow the instructions for Node and run the Docker image locally. If it works, you can then continue with the AWS Lambda creation/update.
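As a rough sketch (the image name is a placeholder, and this assumes the image keeps the base image's default entrypoint with CMD set to your handler, rather than a custom ENTRYPOINT), the AWS Lambda base images ship with the Runtime Interface Emulator, so you can invoke the containerised function locally over HTTP:
# build and run the image locally; port 9000 maps to the emulator's port 8080
docker build -t my-lambda-local .
docker run -p 9000:8080 my-lambda-local
# in another terminal, send a test invocation
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'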
I see that you also have a cron job; why not use the cron job to invoke this Lambda function separately and not define it in your SAM template?
There are a number of ways you can invoke a lambda function based on event.
For example, to invoke it using the CLI (for AWS CLI v2), make sure you have configured the AWS CLI first:
#!/bin/bash
export AWS_PROFILE=<your aws profile>
export AWS_REGION=<aws region>
aws lambda invoke --function-name <function name> \
--cli-binary-format raw-in-base64-out \
--log-type Tail \
--payload <your json payload> \
<output filename>
This makes it loosely coupled. Then you can use Carlo's node-cron suggestion to invoke it as many times as you like, free of charge.
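Or, if you prefer plain cron on the machine that runs the script, a crontab entry like this (the paths are hypothetical) mirrors the cron(* * * * *) schedule from the template:
# run the invoke script every minute and append its output to a log
* * * * * /path/to/invoke-lambda.sh >> /var/log/invoke-lambda.log 2>&1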
I used LocalStack for the demonstration. LocalStack is a cloud service emulator that runs in a single container on your laptop or in your CI environment. You can see more detail at this link: https://github.com/localstack/localstack
I would use node-cron to set up a scheduler in a Node file:
npm install --save node-cron
var cron = require('node-cron');
cron.schedule('* * * * *', () => {
console.log('running a task every minute');
});
https://www.npmjs.com/package/node-cron
You can also check out this DigitalOcean tutorial!

Error deploying AWS CDK stacks with Azure Pipelines (using python in CDK)

app.py example of how my stacks are defined (with some information changed, as you can imagine):
Stack1(app, "Stack1",env=cdk.Environment(account='123456789', region='eu-west-1'))
In my Azure pipeline I'm trying to do a cdk deploy:
- task: AWSShellScript@1
  inputs:
    awsCredentials: 'Service_connection_name'
    regionName: 'eu-west-1'
    scriptType: 'inline'
    inlineScript: |
      sudo bash -c "cdk deploy '*' -v --ci --require-approval-never"
  displayName: "Deploying CDK stacks"
but getting errors. I have the service connection to AWS configured, but the first error was
[Stack_Name] failed: Error: Need to perform AWS calls for account [Account_number], but no credentials have been configured
Stack_Name and Account_Number have been redacted
After this error, I decided to add a step to my pipeline and manually create the .aws/config and .aws/credentials files:
- script: |
    echo "Preparing for CDK"
    echo "Creating directory"
    sudo bash -c "mkdir -p ~/.aws"
    echo "Writing to files"
    sudo bash -c "echo -e '[default]\nregion = $AWS_REGION\noutput = json' > ~/.aws/config"
    sudo bash -c "echo -e '[default]\naws_access_key_id = $AWS_ACCESS_KEY_ID\naws_secret_access_key = $AWS_SECRET_ACCESS_KEY' > ~/.aws/credentials"
  displayName: "Setting up files for CDK"
After this I believed the credentials issue would be fixed, but it still failed. The verbose option revealed the following error amongst the output:
Setting "CDK_DEFAULT_REGION" environment variable to
So instead of setting the region to "eu-west-1", it is being set to nothing.
I imagine I'm missing something, so please educate me and help me get this working.
This happens because you're launching separate instances of a shell with sudo bash, and they don't share the credential environment variables that the AWSShellScript task is populating.
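You can see the effect with a tiny experiment (the variable name is just an example; this assumes the default sudoers env_reset behaviour):
export AWS_ACCESS_KEY_ID=example
sudo bash -c 'echo "AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID"'
# prints an empty value: sudo starts the inner shell with a reset environment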
To fix the credentials issue, replace the inline script with just cdk deploy '*' -v --ci --require-approval never (no sudo bash -c wrapper), so the command runs in the same shell environment where the task injects the credentials.

How to remove an image from Artifact Registry automatically

Using gcloud I can list and remove the images I want with these commands:
gcloud artifacts docker images list LOCATION/PROJECT-ID/REPOSITORY-ID/IMAGE \
  --include-tags --filter="tags:IPLA*" --filter="create_time>2022-04-20T00:00:00"
and then
gcloud artifacts docker images delete LOCATION/PROJECT-ID/REPOSITORY-ID/IMAGE:tag
I am trying to automate that so I can filter by tag name and date and run it every day or week.
I've tried to run gcloud inside a Cloud Function, but I don't think that is allowed:
const { spawn } = require("child_process");
const listening = spawn('gcloud', ['artifacts', 'docker', 'images', 'list',
  'LOCATION/PROJECT-ID/REPOSITORY-ID/IMAGE',
  '--include-tags',
  '--filter="tags:IPLA*"',
  '--filter="create_time>2022-04-20T00:00:00"'
]);
listening.stdout.on("data", data => {
  console.log(`stdout: ${data}`);
});
listening.stderr.on("data", data => {
  console.log(`stderr: ${data}`);
});
listening.on('error', (error) => {
  console.log(`error: ${error.message}`);
});
I get this error when running the cloud function:
error: spawn gcloud ENOENT
I accept any other solution, like a trigger on Cloud Build or Terraform, as long as it can live on Google Cloud.
You are using Cloud Functions, a serverless product where you deploy your code and it runs somewhere, on something that you don't manage.
Here, your code assumes that gcloud is installed in the runtime. You can't make that assumption (it is wrong!).
However, you can use another serverless product where you do manage the runtime environment: Cloud Run. The principle is to create your container (and therefore install what you want in it) and then deploy it. This time you can use the gcloud command, because you know it exists in the container.
However, that's not the right option. You have two better choices.
First of all, use something already built for you by a Google Cloud developer advocate (Seth Vargo). It's named GCR Cleaner and it removes images older than a given age.
Or you can call the Artifact Registry REST API directly to perform the exact same operation as gcloud, but without gcloud. If you want to cheat and go faster, you can run the gcloud command with the --log-http parameter to display all the API calls performed by the CLI. Copy the URL and parameters, and enjoy!
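For example (using the same placeholders as in the question), you can capture the underlying REST calls like this:
# print every HTTP request/response the CLI makes while listing images
gcloud artifacts docker images list LOCATION/PROJECT-ID/REPOSITORY-ID/IMAGE \
  --include-tags --log-http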
Initially I started to look into the solution suggested by Guillaume, though it seemed overkill to deploy a whole image just to clean up the Artifact Registry, so I ended up finding a lighter approach.
I created a shell script to clean the images with the filters I wanted:
#!/usr/bin/env bash
_cleanup() {
  image_path="$location-docker.pkg.dev/$project_id/$repository_id/$image_name"
  echo "Starting to filter: $image_path"
  tags=$(gcloud artifacts docker images list $image_path \
    --include-tags \
    --filter="tags:IPLA* AND UPDATE_TIME.date('%Y-%m-%d', Z)<=$(date --date="-$older_than_days days" +'%Y-%m-%d')" \
    --format='value(TAGS)')
  if [ -z "$tags" ]; then
    echo "No images to clean"
  else
    echo "Images found: $tags"
    for tag in $tags; do
      echo "Deleting image: $image_path:$tag"
      gcloud artifacts docker images delete "$image_path:$tag" --quiet
    done
  fi
}
location=$1
project_id=$2
repository_id=$3
image_name=$4 # in this case I just want to clean the old branches for the same image
older_than_days=$5 # e.g. 7 - number of days to keep in the repository
_cleanup
echo
echo "DONE"
Then I created a scheduled trigger on Cloud Build for the following cloudbuild.yaml file:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    id: Clean up older versions
    entrypoint: 'bash'
    args: [ 'cleanup-old-images.sh', '$_LOCATION', '$PROJECT_ID', '$_REPOSITORY_ID', '$_IMAGE_NAME', '$_OLDER_THAN_DAYS' ]
timeout: 1200s
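If you want to test the build outside of the scheduled trigger, you should be able to submit it manually and pass the substitutions yourself (the values here are placeholders):
gcloud builds submit --config=cloudbuild.yaml \
  --substitutions=_LOCATION=us-central1,_REPOSITORY_ID=my-repo,_IMAGE_NAME=my-image,_OLDER_THAN_DAYS=7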
Here is also a more generic version of the script that iterates over every repository in the project:
#!/usr/bin/env bash
_cleanup() {
  image_path="$2-docker.pkg.dev/$project_id/$1"
  echo "Starting to filter: $image_path"
  images=$(gcloud artifacts docker images list $image_path \
    --filter="UPDATE_TIME.date('%Y-%m-%d', Z)<=$(date --date="-1 years" +'%Y-%m-%d')" \
    --format='value(IMAGE)')
  if [ -z "$images" ]; then
    echo "No images to clean"
  else
    echo "Images found: $images"
    for each in $images; do
      echo "Deleting image: $each"
      gcloud artifacts docker images delete "$each" --quiet
    done
  fi
}
project_id=$1
gcloud artifacts repositories list --format="value(REPOSITORY,LOCATION)" --project=$project_id | tee -a repo.txt
while read p; do
  sentence=$p
  stringarray=($sentence)
  _cleanup ${stringarray[0]} ${stringarray[1]}
done < repo.txt
echo
echo "DONE"
echo "Deleting repo.txt file"
rm -rf repo.txt

SOPS unable to decrypt GCP KMS encrypted file on CircleCI despite GOOGLE_APPLICATION_CREDENTIALS successfully set to service account JSON

I am trying to configure a job on my local CircleCI (using the Docker executor, image: google/cloud-sdk:latest), and that job requires a SOPS GCP KMS encrypted file to be decrypted. I have set up a Google service account for the GCP KMS decrypt service (I can run the script that the CircleCI job runs successfully on my machine, decrypting the SOPS file via the service account, so I know the service account setup is valid). Here is how I am running my job:
1- I base64-encode the Google service account JSON file: base64 path/to/service_account_file.json
2- I run circleci job, setting GCLOUD_SERVICE_KEY environment variable on circleci, with the base64 encoded content from the previous step: circleci local execute --env GCLOUD_SERVICE_KEY='<Base64EncodedServiceAccountJsonFileContent>' --job '<MyJob>'
3- Here is my circleci config:
- run:
    name: <MyJob>
    command: |
      apt-get install -y docker
      apt-get install -y sudo
      cd $(pwd)/path/to/jobcode
      echo $GCLOUD_SERVICE_KEY | base64 -d > ${HOME}/<MyGoogleServiceAccountJsonFile.json>
      export GOOGLE_APPLICATION_CREDENTIALS="${HOME}/<MyGoogleServiceAccountJsonFile.json>"
      gcloud auth activate-service-account --key-file ${HOME}/<MyGoogleServiceAccountJsonFile.json>
      echo $GOOGLE_APPLICATION_CREDENTIALS
      ls -halt $GOOGLE_APPLICATION_CREDENTIALS
      cat $GOOGLE_APPLICATION_CREDENTIALS
      sudo ./<RunJob.sh>
4- I get the following error when I execute the job:
Failed to get the data key required to decrypt the SOPS file.
Group 0: FAILED
projects/<MyProject>/locations/<MyLocation>/keyRings/<MySopsKeyring>/cryptoKeys/<MyKey>: FAILED
- | Cannot create GCP KMS service: google: could not find
| default credentials. See
| https://developers.google.com/accounts/docs/application-default-credentials
| for more information.
Recovery failed because no master key was able to decrypt the file. In
order for SOPS to recover the file, at least one key has to be successful,
but none were.
5- Further, from the console output:
a- I can see that the service account was successfully activated: Activated service account credentials for: [<MyServiceAccount>#<MyProject>.iam.gserviceaccount.com]
b- The GOOGLE_APPLICATION_CREDENTIALS environment variable is set to the service account json's path: /path/to/service_account.json
c- The above file has been correctly base64 decoded and contains valid json:
{
  "client_x509_cert_url": "<MyUrl>",
  "auth_uri": "<MyAuthUri>",
  "private_key": "<MyPrivateKey>",
  "client_email": "<ClientEmail>",
  "private_key_id": "<PrivateKeyId>",
  "client_id": "<ClientId>",
  "token_uri": "<TokenUri>",
  "project_id": "<ProjectId>",
  "type": "<ServiceAccount>",
  "auth_provider_x509_cert_url": "<AuthProviderCertUrl>"
}
6- Some other things I have tried:
a- Tried setting the Google project name in environment variables, but still the same error.
b- Tried setting GOOGLE_APPLICATION_CREDENTIALS to the file's content instead of the file path, but again the same result.
c- Tried setting GOOGLE_APPLICATION_CREDENTIALS by providing the file path without quotes or with single quotes, but still no difference.
d- Tried setting $BASH_ENV by doing echo 'export GOOGLE_APPLICATION_CREDENTIALS=path/to/service_account.json' >> $BASH_ENV, but same error.
Please help.
Five options that could work:
1- Try running the following command: gcloud auth application-default login
2- Try this command to set the env var: echo 'export GOOGLE_APPLICATION_CREDENTIALS=/tmp/service-account.json' >> $BASH_ENV
3- I also see that RunJob.sh is running under root. It could be that the GCP credentials are not visible under sudo by default. Either run the script without sudo, or run the preceding commands with sudo as well (see the sketch after this list).
4- As a last resort (these options worked for me; it could be different in your scenario): { echo 1; echo 1; echo n; } | gcloud init
5- gcloud components update. This sometimes helps when the SDK is outdated. You can also run gcloud config set project [PROJECT_NAME].
You can also check the active accounts with: gcloud auth list
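Regarding the sudo point, a minimal sketch of keeping the variable visible to the root shell (the file names mirror the placeholders in the question):
# option A: ask sudo to preserve the environment (only works if the sudoers policy allows it)
sudo -E ./<RunJob.sh>
# option B: set the variable inside the command that runs as root
sudo bash -c "GOOGLE_APPLICATION_CREDENTIALS='${HOME}/<MyGoogleServiceAccountJsonFile.json>' ./<RunJob.sh>"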

Fetching Tags in Google Cloud Builder

In the newly created Google Container Builder I am unable to fetch git tags during a build. During the build process the default cloning does not seem to fetch git tags. I added a custom build step which calls git fetch --tags, but this results in the error:
Fetching origin
git: 'credential-gcloud.sh' is not a git command. See 'git --help'.
fatal: could not read Username for 'https://source.developers.google.com': No such device or address
# cloudbuild.yaml
#!/bin/bash
openssl aes-256-cbc -k "$ENC_TOKEN" -in gcr_env_vars.sh.enc -out gcr_env_vars.sh -
source gcr_env_vars.sh
env
git config --global url.https://${CI_USER_TOKEN}@github.com/.insteadOf git@github.com:
pushd vendor
git submodule update --init --recursive
popd
docker build -t gcr.io/project-compute/continuous-deploy/project-ui:$COMMIT_SHA -f /workspace/installer/docker/ui/Dockerfile .
docker build -t gcr.io/project-compute/continuous-deploy/project-auth:$COMMIT_SHA -f /workspace/installer/docker/auth/Dockerfile .
This worked for me, as the first build step:
- name: gcr.io/cloud-builders/git
  args: [fetch, --depth=100]
To be clear, you want all tags to be available in the Git repo, not just to trigger on tag changes? In the latter case, the triggering tag should already be available, IIUC.
I'll defer to someone on the Container Builder team for a more detailed explanation, but that error tells me that they used gcloud to clone the Google Cloud Source Repository (GCSR), which configures a Git credential helper with that name. They likely did this in another container before invoking yours, or on the host. Since gcloud and/or the gcloud credential helper isn't available in your container, you can't authenticate properly with GCSR.
You can learn a bit more about the credential helper here.