I have a CircleCI configuration that runs my tests before merging to master. I start my server for the tests, and it needs to connect to my RDS database, which is protected by security groups. I tried to whitelist the CircleCI IP to allow this, but with no luck.
version: 2.1
orbs:
  aws-white-list-circleci-ip: configure/aws-white-list-circleci-ip@1.0.0
  aws-cli: circleci/aws-cli@0.1.13
jobs:
  aws_setup:
    docker:
      - image: cimg/python:3.11.0
    steps:
      - aws-cli/install
      - aws-white-list-circleci-ip/add
  build:
    docker:
      - image: cimg/node:18.4
    steps:
      - checkout
      - run: node --version
      - restore_cache:
          name: Restore Npm Package Cache
          keys:
            # Find a cache corresponding to this specific package-lock.json checksum
            # when this file is changed, this key will fail
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Find the most recently generated cache used from any branch
            - v1-npm-deps-
      - run: npm install
      - run:
          name: start the server
          command: npm start
          background: true
      - save_cache:
          name: Save Npm Package Cache
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: run tests
          command: npm run test
      - aws-white-list-circleci-ip/remove
workflows:
  build-workflow:
    jobs:
      - aws_setup:
          context: aws_context
      - build:
          requires:
            - aws_setup
          context: aws_context
My context environment variables:
AWS_ACCESS_KEY_ID
AWS_DEFAULT_REGION
AWS_SECRET_ACCESS_KEY
GROUPID
The error (screenshot not included).
The orb I am using:
https://circleci.com/developer/orbs/orb/configure/aws-white-list-circleci-ip
I figured it out:
version: 2.1
orbs:
  aws-cli: circleci/aws-cli@0.1.13
jobs:
  build:
    docker:
      - image: cimg/python:3.11.0-node
    steps:
      - checkout
      - run: node --version
      - restore_cache:
          name: Restore Npm Package Cache
          keys:
            # Find a cache corresponding to this specific package-lock.json checksum
            # when this file is changed, this key will fail
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Find the most recently generated cache used from any branch
            - v1-npm-deps-
      - run: npm install
      - aws-cli/install
      - run:
          command: |
            public_ip_address=$(wget -qO- http://checkip.amazonaws.com)
            echo "this computer's public ip address is $public_ip_address"
            aws ec2 authorize-security-group-ingress --region $AWS_DEFAULT_REGION --group-id $GROUPID --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 7000, \"IpRanges\": [{\"CidrIp\": \"${public_ip_address}/32\",\"Description\":\"CircleCi\"}]}]"
      - save_cache:
          name: Save Npm Package Cache
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: run tests
          command: npm run test
# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
  build-workflow:
    jobs:
      - build:
          context: aws_context
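One caveat with this approach: the authorize-security-group-ingress call above opens the security group but nothing ever closes it, so every build leaves another CircleCI IP whitelisted. A minimal sketch of a cleanup step that could be appended to the build job's steps (an assumption on my part, reusing the same GROUPID and AWS_DEFAULT_REGION variables; when: always runs it even if the tests fail):

      - run:
          name: remove circleci ip from security group
          when: always  # run even when earlier steps fail, so the rule is cleaned up
          command: |
            public_ip_address=$(wget -qO- http://checkip.amazonaws.com)
            aws ec2 revoke-security-group-ingress --region $AWS_DEFAULT_REGION --group-id $GROUPID --protocol tcp --port 22-7000 --cidr ${public_ip_address}/32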
Related
I have some celery tasks in my django app; in the tests section of the project I have several celery tasks that do some database work using the django ORM.
In my local environment pytest works fine, but in GitHub Actions the following error is shown:
kombu.exceptions.OperationalError
In my pytest conftest.py file I have used the following setup (taken from the celery docs):
@pytest.fixture(scope="session")
def celery_config():
    return {"broker_url": "amqp://", "result_backend": "redis://"}
but still the exception is thrown. So, how can I properly create a GitHub workflow that can test celery tasks without the above exception?
My GitHub workflow:
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [ '3.10' ]
    services:
      # Label used to access the service container
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
        env:
          discovery.type: single-node
        options: >-
          --health-cmd "curl http://localhost:9200/_cluster/health"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
        ports:
          - 9400:9200
      postgres:
        # Docker Hub image
        image: postgres:10.8
        # Provide the password for postgres
        env:
          POSTGRES_USER: django
          POSTGRES_PASSWORD: django
          POSTGRES_DB: django
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      redis:
        # Docker Hub image
        image: redis
        # Set health checks to wait until redis has started
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v1
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Test with pytest
        env:
          ENV: TEST
        run: pytest
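For reference, one plausible explanation (an assumption, not confirmed in the question): the celery_config fixture points the broker at amqp://, i.e. a local RabbitMQ, but the workflow only provisions Redis, and the redis service maps no port to the runner, so kombu cannot reach any broker. A minimal conftest.py sketch that reuses the Redis service for both broker and result backend instead:

import pytest

@pytest.fixture(scope="session")
def celery_config():
    # Point both broker and result backend at the Redis service container;
    # there is no RabbitMQ in this workflow for amqp:// to reach.
    return {
        "broker_url": "redis://localhost:6379/0",
        "result_backend": "redis://localhost:6379/1",
    }

For this to work the redis service above would also need a ports mapping (e.g. 6379:6379) so the job can reach it on localhost; alternatively, keep amqp:// and add a rabbitmq service with port 5672 published.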
I previously deployed my Django React app on a dedicated server, and now I am trying to achieve the same with an Azure Web App so I can use CI/CD more easily. I've configured my project as below, but only my Django app appears to deploy, and I get a '404 main.js and index.css not found'.
This makes me think there is an issue with my static file configuration, but I'm unsure.
.yml file:
name: Build and deploy Python app to Azure Web App - test123
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: npm install, build, and test
        run: |
          npm install
          npm run build --if-present
        working-directory: ./frontend
      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.8'
      - name: Create and start virtual environment
        run: |
          python -m venv venv
          source venv/bin/activate
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          python manage.py collectstatic --noinput
      - name: Zip artifact for deployment
        run: zip pythonrelease.zip ./* -r
      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v2
        with:
          name: python-app
          path: pythonrelease.zip
      # Optional: Add step to run tests here (PyTest, Django test suites, etc.)
  deploy:
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: 'Production'
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v2
        with:
          name: python-app
          path: .
      - name: unzip artifact for deployment
        run: unzip pythonrelease.zip
      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        id: deploy-to-webapp
        with:
          app-name: 'test123'
          slot-name: 'Production'
          publish-profile: ${{ secrets.secret }}
settings.py
STATIC_URL = '/static/'
STATIC_ROOT = 'staticfiles'
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, "static"),
)
Repo structure: (screenshot not included)
Any advice would be greatly appreciated.
Cheers
To host static files in your web app, add the whitenoise package to requirements.txt and the configuration for it to settings.py, as mentioned here: Django Tips
requirements.txt:
whitenoise==4.1.2
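For completeness, a sketch of the settings.py side of that configuration, following the WhiteNoise docs (these exact lines are an assumption; the answer above only names the package):

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    # WhiteNoise should come directly after SecurityMiddleware
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ... the rest of your middleware ...
]

# Serve compressed, cache-busted files collected into STATIC_ROOT
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"

With this in place, the python manage.py collectstatic step in the build job is what populates staticfiles, so running the React build before collectstatic, as the workflow above does, matters.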
I have followed this question: How can I connect GitHub actions with AWS deployments without using a secret key?.
However, I am trying to go one step further by deploying a lambda function using Serverless.
What I have tried so far:
name: For Production
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    strategy:
      matrix:
        node-version: [16.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
          cache-dependency-path: ./backend-operations/package-lock.json
      - name: Create env file
        run: |
          touch ./backend-operations/.env
          echo JWKS_URI=${{ secrets.JWKS_URI }} >> ./backend-operations/.env
          echo AUDIENCE=${{ secrets.AUDIENCE }} >> ./backend-operations/.env
          echo TOKEN_ISSUER=${{ secrets.TOKEN_ISSUER }} >> ./backend-operations/.env
      - run: npm ci
        working-directory: ./backend-operations
      - run: npm run build --if-present
        working-directory: ./backend-operations
      - run: npm test
        working-directory: ./backend-operations
      - name: Install Serverless Framework
        run: npm install -g serverless
      - name: Configure AWS
        run: |
          sleep 5 # Need to have a delay to acquire this
          export AWS_ROLE_ARN=arn:aws:iam::xxxxxxx:role/my-role
          export AWS_WEB_IDENTITY_TOKEN_FILE=/tmp/awscreds
          export AWS_DEFAULT_REGION=ap-southeast-1
          echo AWS_WEB_IDENTITY_TOKEN_FILE=$AWS_WEB_IDENTITY_TOKEN_FILE >> $GITHUB_ENV
          echo AWS_ROLE_ARN=$AWS_ROLE_ARN >> $GITHUB_ENV
          echo AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION >> $GITHUB_ENV
          curl -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
            "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=githubactions" \
            | jq -r '.value' > $AWS_WEB_IDENTITY_TOKEN_FILE
          sls deploy --stage prod --verbose
        working-directory: './backend-operations'
      # - name: Deploy to AWS
      #   run: serverless deploy --stage prod --verbose
      #   working-directory: './backend-operations'
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
        with:
          token: ${{ secrets.CODECOV_SECRET_TOKEN }}
I solved it using the aws-actions/configure-aws-credentials GitHub Action, as it sets a temporary access key id and secret in the environment.
Hence there is no need to create AWS programmatic keys from here on.
Note: the latest update of GitHub OIDC changed its domain name -> https://token.actions.githubusercontent.com
# This workflow will do a clean install of node dependencies, cache/restore them, build the source code and run tests across different versions of node
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-nodejs-with-github-actions
name: Production-Deployment
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    strategy:
      matrix:
        node-version: [16.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
          cache-dependency-path: ./backend-operations/package-lock.json
      - name: Create env file
        run: |
          touch ./backend-operations/.env
          echo JWKS_URI=${{ secrets.JWKS_URI }} >> ./backend-operations/.env
          echo AUDIENCE=${{ secrets.AUDIENCE }} >> ./backend-operations/.env
          echo TOKEN_ISSUER=${{ secrets.TOKEN_ISSUER }} >> ./backend-operations/.env
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@master
        with:
          aws-region: ap-southeast-1
          role-to-assume: ${{ secrets.ROLE_ARN }}
      - run: npm ci
        working-directory: ./backend-operations
      - run: npm run build --if-present
        working-directory: ./backend-operations
      - run: npm test
        working-directory: ./backend-operations
      - name: Install Serverless Framework
        run: npm install -g serverless
      - name: Serverless Authentication
        run: sls config credentials --provider aws --key ${{ env.AWS_ACCESS_KEY_ID }} --secret ${{ env.AWS_SECRET_ACCESS_KEY }}
      - name: Deploy to AWS
        run: serverless deploy --stage prod --verbose
        working-directory: './backend-operations'
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
        with:
          token: ${{ secrets.CODECOV_SECRET_TOKEN }}
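For context, the role passed in secrets.ROLE_ARN has to trust the GitHub OIDC provider for role-to-assume to work. A sketch of what that trust policy typically looks like (the account id, owner, and repo are placeholders, not values from this answer):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:<owner>/<repo>:*"
        }
      }
    }
  ]
}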
I am deploying a build spec for AWS CodeBuild using the Serverless Framework. When I deploy, the newline after the first line is missing from the build spec. This resource previously deployed without a problem, and I cannot see anything I have done to break it. Is this a problem on my end or a bug in Serverless/CloudFormation?
Below are the CloudFormation template and the resulting build spec, copied from the AWS console.
Resources:
  CodeBuild:
    Type: 'AWS::CodeBuild::Project'
    Properties:
      Name: sls-retrobase-frontend-CodeBuild-${opt:stage}
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: CODEPIPELINE
        Name: sls-retrobase
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
      Source:
        Type: CODEPIPELINE
        BuildSpec: !Sub
          - >
            version: 0.2
            phases:
              pre_build:
                commands:
                  - echo List directory files...
                  - ls
                  - echo Installing source NPM dependencies...
                  - npm install
              build:
                commands:
                  - echo List active directory...
                  - ls
                  - echo Inserting Api Url into config.json from environment
                  - node makeConfig.js ${apiUrl} ${auth0Audience} ${auth0Domain} ${auth0ClientId}
                  - echo Build started on `date`
                  - npm run build
              post_build:
                commands:
                  - echo List build directory...
                  - ls ./build
                  - aws s3 cp --recursive --acl public-read ./build s3://${Website}
            artifacts:
              files:
                - '**/*'
          - apiUrl: !Join ['', [ "https://", !Ref QueryRestApi, ".execute-api.us-east-1.amazonaws.com/${opt:stage}/sls-retrobase-${opt:stage}"]]
            websiteUrl: !GetAtt Website.WebsiteURL
            auth0Audience: '***'
            auth0Domain: '***'
            auth0ClientId: '***'
build spec:
version: 0.2 phases:
  pre_build:
    commands:
      - echo List directory files...
      - ls
      - echo Installing source NPM dependencies...
      - npm install
  build:
    commands:
      - echo List active directory...
      - ls
      - echo Inserting Api Url into config.json from environment
      - node makeConfig.js https://h0suk54yw0.execute-api.us-east-1.amazonaws.com/test/sls-retrobase-test https://sls-retrobase https://dev-y33gimcf.eu.auth0.com/ Xn6SDc43vE8P0sQHkVLtiBSBVFT5rJMU
      - echo Build started on `date`
      - npm run build
  post_build:
    commands:
      - echo List build directory...
      - ls ./build
      - aws s3 cp --recursive --acl public-read ./build s3://serverless-retrobase-resources-test-website-7cvhqlgbkfj7
artifacts:
  files:
    - '**/*'
This is probably because of your use of >. Please change it to |:

BuildSpec: !Sub
  - |
    version: 0.2
    phases:

Alternatively, fix your spacing when using >: the > block and your code are not aligned.
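For reference, a small self-contained illustration of the difference (standard YAML behavior, nothing CloudFormation-specific): > folds the newline between equally indented lines into a space, while | keeps every newline literally:

folded: >
  version: 0.2
  phases:
# value of folded: "version: 0.2 phases:\n"

literal: |
  version: 0.2
  phases:
# value of literal: "version: 0.2\nphases:\n"

That folding is exactly why the deployed build spec above begins with "version: 0.2 phases:" on one line, while the lines nested under phases survive: folded scalars preserve newlines for lines indented past the first one.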
I have a Django app whose Selenium tests I am trying to run on CircleCI, but even though they run fine locally in my test environment, they keep failing on CircleCI with a NoSuchElementException from Selenium.
At the beginning of most of my browser tests, I run the following method, which is what is making the tests fail:
def login():
    driver.get(self.live_server_url + reverse("login"))
    # FAILURE HAPPENS HERE: not able to find the `id_email` element
    driver.find_element_by_id("id_email").send_keys(u.email)
    driver.find_element_by_id("id_password").send_keys("12345678")
    driver.find_element_by_id("submit-login").click()
config.yml
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.6.5-node-browsers
        environment:
          CI_TESTING: 1
      - image: redis
    working_directory: ~/repo
    steps:
      - checkout
      # Selenium setup
      - run: mkdir test-reports
      - run:
          name: Download Selenium
          command: |
            curl -O http://selenium-release.storage.googleapis.com/3.5/selenium-server-standalone-3.5.3.jar
      - run:
          name: Start Selenium
          command: |
            java -jar selenium-server-standalone-3.5.3.jar -log test-reports/selenium.log
          background: true
      - restore_cache:
          name: Restore Pip Package Cache
          keys:
            - v1-dependencies-{{ checksum "requirements.txt" }}
            - v1-dependencies-
      - run:
          name: Install Pip Dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -r requirements.txt
      - save_cache:
          name: Save Pip Package Cache
          key: v1-dependencies-{{ checksum "requirements.txt" }}
          paths:
            - ./venv
      - restore_cache:
          name: Restore Yarn Package Cache
          keys:
            - yarn-packages-{{ .Branch }}-{{ checksum "yarn.lock" }}
            - yarn-packages-{{ .Branch }}
            - yarn-packages-master
            - yarn-packages-
      - run:
          name: Install Yarn Dependencies
          command: |
            yarn install
      - save_cache:
          name: Save Yarn Package Cache
          key: yarn-packages-{{ .Branch }}-{{ checksum "yarn.lock" }}
          paths:
            - node_modules/
      - run:
          name: Run Django Tests
          command: |
            . venv/bin/activate
            ./test.sh
      - store_artifacts:
          path: test-reports
          destination: test-reports
Driver definition:
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("headless")
driver = webdriver.Chrome(chrome_options=chrome_options)
Is my CircleCI setup wrong? I have looked at multiple pages in the documentation and it all seems right to me.
https://circleci.com/docs/2.0/project-walkthrough/#install-and-run-selenium-to-automate-browser-testing
https://github.com/CircleCI-Public/circleci-demo-python-flask/blob/master/.circleci/config.yml#L16:7
https://circleci.com/docs/2.0/browser-testing/
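For reference, a common mitigation for CI-only NoSuchElementException failures (a suggestion, not a confirmed fix for this setup) is to replace the first direct find_element_by_id call with an explicit wait, since pages often render more slowly in CI containers than locally. A sketch against the Selenium 3 API used above:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def login():
    driver.get(self.live_server_url + reverse("login"))
    # Wait up to 10 seconds for the login form to appear before interacting
    email_field = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "id_email"))
    )
    email_field.send_keys(u.email)
    driver.find_element_by_id("id_password").send_keys("12345678")
    driver.find_element_by_id("submit-login").click()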