I am trying to add a description to the newly created version only, but no luck ;-(
I can add a description to the Lambda function and to the alias, but not to the newly published version.
You can refer to the screenshot below, which shows where I want the description from the pipeline.
Here is my code in the yml file:
- step:
    name: Build and publish version
    oidc: true
    script:
      - apt-get update && apt-get install -y zip jq
      - for dir in ${LAMBDA_FUNCTION_NAME}; do
      - echo $dir
      - cd ./$dir && npm install --production
      - zip -r code.zip *
      # lambda config
      # - export ENVIRONMENT="dev"
      # Create lambda configuration file with environment variables
      - export LAMBDA_CONFIG="{}"
      - echo $LAMBDA_CONFIG > ./lambdaConfig.json
      - pipe: atlassian/aws-lambda-deploy:1.5.0
        variables:
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          AWS_OIDC_ROLE_ARN: '############################'
          FUNCTION_NAME: $dir
          COMMAND: 'update'
          ZIP_FILE: 'code.zip'
          FUNCTION_CONFIGURATION: "lambdaConfig.json"
          WAIT: "true"
          #PUBLISH_FLAG: "false"
      - BITBUCKET_PIPE_SHARED_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes"
      # - if [ ${ALIAS} == "" ]; then echo "Setup successful"; else
      - VERSION=$(jq --raw-output '.Version' $BITBUCKET_PIPE_SHARED_STORAGE_DIR/aws-lambda-deploy-env)
      - cd .. && echo ${VERSION} > ./version.txt
      - echo "Published version:${VERSION}"
      - cat version.txt
      - VERSION=$(cat ./version.txt)
      # - fi;
      - pipe: atlassian/aws-lambda-deploy:1.5.0
        variables:
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          AWS_OIDC_ROLE_ARN: '########################'
          FUNCTION_NAME: $dir
          COMMAND: 'alias'
          ALIAS: ${ALIAS}
          VERSION: '${VERSION}'
          DESCRIPTION: ${DESCRIPTION}
      - done
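One possible workaround (a sketch, not a documented option of the aws-lambda-deploy pipe): publish the version yourself with the AWS CLI, which does accept a per-version description, instead of relying on the pipe's implicit publish. This assumes the AWS CLI is installed in the build image and that credentials are available to it (e.g., exported from the same OIDC role the pipe uses):

script:
  # Sketch: after the 'update' pipe finishes, publish the new version
  # directly so the description lands on the version itself.
  # Assumes 'aws' is installed and credentials are already configured.
  - VERSION=$(aws lambda publish-version --function-name "$dir" --description "${DESCRIPTION}" --query 'Version' --output text)
  - echo "Published version:${VERSION}"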
I'm trying to create a custom RedHat 8 image using EC2 Image Builder. In one of the components added to the pipeline, I create the ansible user and use S3 to download the authorized_keys and a custom sudoers.d file. The issue I'm facing is that the sudoers file called "ansible" gets copied just fine, but the authorized_keys doesn't. CloudWatch says the component executes without errors and the files are downloaded, but when I launch an EC2 instance from this AMI, the authorized_keys file is not in the expected path.
What's happening?
This is the recipe I'm using:
name: USER-Ansible
description: Creation and configuration of the ansible user
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: UserCreate
        action: ExecuteBash
        inputs:
          commands:
            - groupadd -g 2004 ux
            - useradd -u 4134 -g ux -c "AWX Ansible" -m -d /home/ansible ansible
            - mkdir /home/ansible/.ssh
      - name: FilesDownload
        action: S3Download
        inputs:
          - source: s3://[REDACTED]/authorized_keys
            destination: /home/ansible/.ssh/authorized_keys
            expectedBucketOwner: [REDACTED]
            overwrite: false
          - source: s3://[REDACTED]/ansible
            destination: /etc/sudoers.d/ansible
            expectedBucketOwner: [REDACTED]
            overwrite: false
      - name: FilesConfiguration
        action: ExecuteBash
        inputs:
          commands:
            - chown ansible:ux /home/ansible/.ssh/authorized_keys; chmod 600 /home/ansible/.ssh/authorized_keys
            - chown ansible:ux /home/ansible/.ssh; chmod 700 /home/ansible/.ssh
            - chown root:root /etc/sudoers.d/ansible; chmod 440 /etc/sudoers.d/ansible
Thanks in advance!
AWS EC2 Image Builder cleans up afterwards
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#post-build-cleanup
# Clean up for ssh files
SSH_FILES=(
    "/etc/ssh/ssh_host_rsa_key"
    "/etc/ssh/ssh_host_rsa_key.pub"
    "/etc/ssh/ssh_host_ecdsa_key"
    "/etc/ssh/ssh_host_ecdsa_key.pub"
    "/etc/ssh/ssh_host_ed25519_key"
    "/etc/ssh/ssh_host_ed25519_key.pub"
    "/root/.ssh/authorized_keys"
)
if [[ -f {{workingDirectory}}/skip_cleanup_ssh_files ]]; then
    echo "Skipping cleanup of ssh files"
else
    echo "Cleaning up ssh files"
    cleanup "${SSH_FILES[@]}"
    USERS=$(ls /home/)
    for user in $USERS; do
        echo Deleting /home/"$user"/.ssh/authorized_keys;
        sudo find /home/"$user"/.ssh/authorized_keys -type f -exec shred -zuf {} \;
    done
    for user in $USERS; do
        if [[ -f /home/"$user"/.ssh/authorized_keys ]]; then
            echo Failed to delete /home/"$user"/.ssh/authorized_keys;
            exit 1
        fi;
    done;
fi;
You can skip individual sections of the cleanup script.
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#override-linux-cleanup-script
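For example, a build step along these lines should create the marker file the cleanup script checks for (a sketch: the step name is made up, and the {{workingDirectory}} reference follows the same convention the cleanup script above uses; verify the exact path against the linked page):

- name: SkipSshCleanup
  action: ExecuteBash
  inputs:
    commands:
      # The post-build cleanup script skips its "Clean up for ssh files"
      # section when this marker file exists.
      - touch {{workingDirectory}}/skip_cleanup_ssh_files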
I am deploying a build spec for AWS CodeBuild using the Serverless Framework. When I deploy, the newline after the first line is missing from the build spec. This resource previously deployed without a problem, and I can't see anything I've done to break it. Is this a problem on my end or a bug in Serverless/CloudFormation?
Below is the CloudFormation template and the resulting build spec copied from the AWS console.
Resources:
  CodeBuild:
    Type: 'AWS::CodeBuild::Project'
    Properties:
      Name: sls-retrobase-frontend-CodeBuild-${opt:stage}
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: CODEPIPELINE
        Name: sls-retrobase
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
      Source:
        Type: CODEPIPELINE
        BuildSpec: !Sub
          - >
            version: 0.2
            phases:
              pre_build:
                commands:
                  - echo List directory files...
                  - ls
                  - echo Installing source NPM dependencies...
                  - npm install
              build:
                commands:
                  - echo List active directory...
                  - ls
                  - echo Inserting Api Url into config.json from environment
                  - node makeConfig.js ${apiUrl} ${auth0Audience} ${auth0Domain} ${auth0ClientId}
                  - echo Build started on `date`
                  - npm run build
              post_build:
                commands:
                  - echo List build directory...
                  - ls ./build
                  - aws s3 cp --recursive --acl public-read ./build s3://${Website}
            artifacts:
              files:
                - '**/*'
          - apiUrl: !Join ['', [ "https://", !Ref QueryRestApi, ".execute-api.us-east-1.amazonaws.com/${opt:stage}/sls-retrobase-${opt:stage}"]]
            websiteUrl: !GetAtt Website.WebsiteURL
            auth0Audience: '***'
            auth0Domain: '***'
            auth0ClientId: '***'
build spec:
version: 0.2 phases:
  pre_build:
    commands:
      - echo List directory files...
      - ls
      - echo Installing source NPM dependencies...
      - npm install
  build:
    commands:
      - echo List active directory...
      - ls
      - echo Inserting Api Url into config.json from environment
      - node makeConfig.js https://h0suk54yw0.execute-api.us-east-1.amazonaws.com/test/sls-retrobase-test https://sls-retrobase https://dev-y33gimcf.eu.auth0.com/ Xn6SDc43vE8P0sQHkVLtiBSBVFT5rJMU
      - echo Build started on `date`
      - npm run build
  post_build:
    commands:
      - echo List build directory...
      - ls ./build
      - aws s3 cp --recursive --acl public-read ./build s3://serverless-retrobase-resources-test-website-7cvhqlgbkfj7
artifacts:
  files:
    - '**/*'
This is probably because of your use of >. Please change it to |:
BuildSpec: !Sub
  - |
    version: 0.2
    phases:
Alternatively, fix your indentation when using >: the > indicator and your code are not aligned.
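A minimal side-by-side of the two block scalar styles (a standalone example, not from the template above):

# '>' (folded) joins consecutive lines at the same indent with a space:
folded: >
  version: 0.2
  phases:
# value: "version: 0.2 phases:\n"

# '|' (literal) keeps every newline exactly as written:
literal: |
  version: 0.2
  phases:
# value: "version: 0.2\nphases:\n"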
I want to do CI/CD with CircleCI to ECR and ECS.
The Dockerfiles work correctly locally with docker-compose, but I am getting the following error in CircleCI:
COPY failed: stat /var/lib/docker/tmp/docker-builder505910231/b-plus-app/build: no such file or directory
Here is the relevant code where the error occurred.
Dockerfile (react):
FROM node:14.17-alpine
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN rm -r -f b-plus-app/build && cd b-plus-app \
&& rm -r -f node_modules && npm i && npm run build
Dockerfile (nginx):
FROM nginx:1.15.8
RUN rm -f /etc/nginx/conf.d/*
RUN rm -r -f /usr/share/nginx/html
#Stop Here
COPY b-plus-app/build /var/www
COPY prod_conf/ /etc/nginx/conf.d/
CMD /usr/sbin/nginx -g 'daemon off;' -c /etc/nginx/nginx.conf
.circleci/config.yml:
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@6.15
  aws-ecs: circleci/aws-ecs@2.0.0
workflows:
  react-deploy:
    jobs:
      - persist_to_workspace:
      - aws-ecr/build-and-push-image:
          account-url: AWS_ECR_ACCOUNT_URL
          region: AWS_REGION
          aws-access-key-id: AWS_ACCESS_KEY_ID
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY
          create-repo: true
          path: 'front/'
          repo: front
          tag: "${CIRCLE_SHA1}"
          filters:
            branches:
              only: main
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: 'b_plus_service'
          cluster-name: 'b-plus'
          service-name: 'b-plus'
          container-image-name-updates: "container=front,tag=${CIRCLE_SHA1}"
  nginx-deploy:
    jobs:
      - aws-ecr/build-and-push-image:
          account-url: AWS_ECR_ACCOUNT_URL
          region: AWS_REGION
          aws-access-key-id: AWS_ACCESS_KEY_ID
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY
          create-repo: true
          dockerfile: Dockerfile.prod
          path: 'front/'
          repo: nginx
          tag: "${CIRCLE_SHA1}"
          #requires:
          #  - react-deploy:
          #  - rails-deploy:
          filters:
            branches:
              only: main
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: 'b_plus_service'
          cluster-name: 'b-plus'
          service-name: 'b-plus'
          container-image-name-updates: "container=nginx,tag=${CIRCLE_SHA1}"
If you know how to fix the problem, please let me know. Thank you for reading my question.
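A likely cause (my reading; no answer is included here): with path: 'front/', the Docker build context on CircleCI is the fresh Git checkout, and front/b-plus-app/build only ever exists inside the React image that docker-compose builds locally; it is never present in the checkout, so COPY in the nginx Dockerfile finds nothing to stat. One hedged sketch of a fix using CircleCI 2.1 pre-steps to produce the build output before the orb builds the nginx image (the directory layout and checkout ordering are assumptions; adjust as needed):

- aws-ecr/build-and-push-image:
    # ...same parameters as the nginx job above, plus:
    pre-steps:
      - checkout # pre-steps run before the orb's own steps, so check out first
      - run:
          name: Build the React app so COPY b-plus-app/build has a source
          command: cd front/b-plus-app && npm install && npm run build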
I want to deploy an AWS Lambda .NET Core project using a Bitbucket pipeline.
I have created bitbucket-pipelines.yml like below, but after the build runs I get this error:
MSBUILD : error MSB1003: Specify a project or solution file. The current working directory does not contain a project or solution file.
File code:
image: microsoft/dotnet:sdk
pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script: # Modify the commands below to build your repository.
          - export PROJECT_NAME=TestAWS/AWSLambda1/AWSLambda1.sln
          - dotnet restore
          - dotnet build $PROJECT_NAME
          - pipe: atlassian/aws-lambda-deploy:0.2.1
            variables:
              AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
              AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
              AWS_DEFAULT_REGION: 'us-east-1'
              FUNCTION_NAME: 'my-lambda-function'
              COMMAND: 'update'
              ZIP_FILE: 'code.zip'
The project structure is like this:
The problem is here:
PROJECT_NAME=TestAWS/AWSLambda1/AWSLambda1.sln
This is the incorrect path. Bitbucket Pipelines will use a special path in the Docker image, something like /opt/atlassian/pipelines/agent/build/YOUR_PROJECT, to do a Git clone of your project.
You can see this when you click on the "Build Setup" step in the Pipelines web console:
Cloning into '/opt/atlassian/pipelines/agent/build'...
You can use a pre-defined environment variable to retrieve this path: $BITBUCKET_CLONE_DIR, as described here: https://support.atlassian.com/bitbucket-cloud/docs/variables-in-pipelines/
Consider something like this in your yml build script:
script:
  - echo $BITBUCKET_CLONE_DIR # Debug: Print the $BITBUCKET_CLONE_DIR
  - pwd # Debug: Print the current working directory
  - find "$(pwd -P)" -name AWSLambda1.sln # Debug: Show the full file path of AWSLambda1.sln
  - export PROJECT_NAME="$BITBUCKET_CLONE_DIR/AWSLambda1.sln"
  - echo $PROJECT_NAME
  - if [ -f "$PROJECT_NAME" ]; then echo "File exists" ; fi
  # Try this if the file path is not as expected
  - export PROJECT_NAME="$BITBUCKET_CLONE_DIR/AWSLambda1/AWSLambda1.sln"
  - echo $PROJECT_NAME
  - if [ -f "$PROJECT_NAME" ]; then echo "File exists" ; fi
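Separately (my observation, not part of the original answer): the pipe step expects ZIP_FILE: 'code.zip', but nothing in the script creates that archive. A sketch of the missing packaging commands, assuming the function project sits under AWSLambda1 with a project file named AWSLambda1.csproj (an assumption) and a Debian-based image where apt-get and zip are available:

script:
  # ...restore and build as above, then publish and package:
  - dotnet publish "$BITBUCKET_CLONE_DIR/AWSLambda1/AWSLambda1.csproj" -c Release -o ./publish
  # zip the published output into the code.zip the pipe expects
  - apt-get update && apt-get install -y zip
  - cd ./publish && zip -r ../code.zip . && cd ..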
I'm trying to upgrade from CircleCI 1.0 to 2.0 and I'm having trouble getting the Docker images to build. I've got the following job:
... There is another Job here which runs some tests
deploy-aws:
  # machine: true
  docker:
    - image: ecrurl/backend
      aws_auth:
        aws_access_key_id: ID1
        aws_secret_access_key: $ECR_AWS_SECRET_ACCESS_KEY # or project UI envar reference
  environment:
    TAG: $CIRCLE_BRANCH-$CIRCLE_SHA1
    ECR_URL: ecrurl/backend
    DOCKER_IMAGE: $ECR_URL:$TAG
    STAGING_BUCKET: staging
    TESTING_BUCKET: testing
    PRODUCTION_BUCKET: production
    NPM_TOKEN: $NPM_TOKEN
  working_directory: ~/backend
  steps:
    - run:
        name: Install awscli
        command: sudo apt-get -y -qq install awscli
    - checkout
    - run:
        name: Build Docker image
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            docker pull $ECR_URL:latest
            docker build -t backend NODE_ENV=$NODE_ENV --build-arg NPM_TOKEN=$NPM_TOKEN .
            docker tag backend $DOCKER_IMAGE
            docker push $DOCKER_IMAGE
            docker tag -f $DOCKER_IMAGE $ECR_URL:latest
            docker push $ECR_URL:latest
          fi
workflows:
  version: 2
  build-deploy:
    jobs:
      - build # This one simply runs tests
      - deploy-aws:
          requires:
            - build
Running this throws the following error:
#!/bin/bash -eo pipefail
sudo apt-get -y -qq install awscli
/bin/bash: sudo: command not found
Exited with code 127
All I had to do before was this:
dependencies:
  pre:
    - $(aws ecr get-login --region us-west-2)
deployment:
  staging:
    branch: staging
    commands:
      - docker pull $ECR_URL:latest
      - docker build -t backend NODE_ENV=$NODE_ENV --build-arg NPM_TOKEN=$NPM_TOKEN .
      - docker tag backend $DOCKER_IMAGE
      - docker push $DOCKER_IMAGE
      - docker tag -f $DOCKER_IMAGE $ECR_URL:latest
      - docker push $ECR_URL:latest
Here is the config I changed to make this work. (The original failure happened because the ecrurl/backend image doesn't include sudo; the Alpine-based docker image below runs as root, so dependencies install directly with apk.)
deploy-aws:
  docker:
    - image: docker:17.05.0-ce-git
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Install dependencies
        command: |
          apk add --no-cache \
            py-pip=9.0.0-r1
          pip install \
            docker-compose==1.12.0 \
            awscli==1.11.76
    - restore_cache:
        keys:
          - v1-{{ .Branch }}
        paths:
          - /caches/app.tar
    - run:
        name: Load Docker image layer cache
        command: |
          set +o pipefail
          docker load -i /caches/app.tar | true
    - run:
        name: Build Docker image
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            docker build -t backend --build-arg NPM_TOKEN=$NPM_TOKEN .
          fi
    - run:
        name: Save Docker image layer cache
        command: |
          mkdir -p /caches
          # save the image built above (named 'backend')
          docker save -o /caches/app.tar backend
    - save_cache:
        key: v1-{{ .Branch }}-{{ epoch }}
        paths:
          - /caches/app.tar
    - run:
        name: Tag and push to ECR
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            docker tag backend $DOCKER_IMAGE
            docker push $DOCKER_IMAGE
            docker tag $DOCKER_IMAGE $ECR_URL:latest
            docker push $ECR_URL:latest
          fi
Check out this link: https://github.com/builtinnya/circleci-2.0-beta-docker-example/blob/master/.circleci/config.yml#L39