I created a Dockerfile and built a Grafana image from it, then started the Docker container.
This is my Dockerfile:
FROM grafana/grafana
ENV GF_AUTH_DISABLE_LOGIN_FORM "false"
ENV GF_AUTH_ANONYMOUS_ENABLED "false"
ENV GF_AUTH_ANONYMOUS_ORG_ROLE "Viewer"
ENV GF_SMTP_ENABLED "true"
ENV GF_SMTP_HOST "smtp.gmail.com:587"
ENV GF_SMTP_USER "test@gmail.com"
ENV GF_SMTP_PASSWORD ""
ENV GF_SMTP_SKIP_VERIFY "true"
ENV GF_SMTP_FROM_ADDRESS "admin@grafana.localhost"
ENV GF_SMTP_FROM_NAME "Grafana"
ENV GF_SMTP_EHLO_IDENTITY "dashboard.example.com"
I was able to start the container using this command:
docker run -d -p 6375:3000 --name grafan grafana-n
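For debugging, it can help to confirm that the container actually received the SMTP settings and to watch the Grafana log while triggering the reset email; a small sketch using the container name from the command above:
# verify the GF_SMTP_* variables are visible inside the container
docker exec grafan env | grep GF_SMTP
# follow the Grafana log; failed SMTP sends are reported here
docker logs -f grafan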
I created a notification channel, and from there I can send a test mail.
But I can't send the Grafana forgot-password reset email. When I click the "Send reset email" button it shows a notification that the email was sent, but I never see it in my Gmail.
However, if I bring Grafana up locally with the same configs at
http://localhost:3000/
I can send the reset password email. What could be the issue?
Can the Grafana Tempo backend sign (SigV4) the requests that it sends to AWS managed Prometheus (AMP)?
metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: example
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      - url: https://aps-workspaces.eu-central-1.amazonaws.com/workspaces/ws-2354ezthd34w4ter/api/v1/remote_write
        send_exemplars: true
Or is there a proxy server that can run in the middle, between Tempo and Prometheus, and do the signing?
aws-sigv4-proxy solves this issue for me.
docker run --name sigv4proxy -ti --rm \
--network=host \
public.ecr.aws/aws-observability/aws-sigv4-proxy:1.6.1 \
-v --name aps --region eu-central-1 \
--host aps-workspaces.eu-central-1.amazonaws.com
Now Tempo can use localhost to access AMP (AWS Managed Prometheus):
storage:
  path: /tmp/tempo/generator/wal
  remote_write:
    - url: http://localhost:8080/workspaces/ws-1d8a668e-382b-4c49-9354-ad099f2b6260/api/v1/remote_write #http://prometheus:9090/api/v1/write
      send_exemplars: true
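One thing to keep in mind: the proxy itself still needs AWS credentials to sign with. On EC2 it can pick them up from the instance role; otherwise they can be passed into the container as environment variables, assuming the proxy uses the standard AWS SDK credential chain. A sketch based on the command above (key values are placeholders):
docker run --name sigv4proxy -ti --rm \
  --network=host \
  -e AWS_ACCESS_KEY_ID=<access key id> \
  -e AWS_SECRET_ACCESS_KEY=<secret access key> \
  public.ecr.aws/aws-observability/aws-sigv4-proxy:1.6.1 \
  -v --name aps --region eu-central-1 \
  --host aps-workspaces.eu-central-1.amazonaws.com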
I have a Django app deployed in a Docker container in an Azure App Service.
It works fine on the provided URL: https://xxxx.azurewebsites.net
I then set up a custom domain through Azure using a CNAME, xxxx.mysite.com. I verified the domain, and then purchased an SSL cert through Azure and bound it to my custom domain.
Now the app loads up to the login screen, but authentication fails. I am not sure what I am missing. I also cannot figure out how to access any HTTP or nginx logs within the App Service.
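Regarding the logs: with a custom container, the gunicorn/stdout output can usually be streamed with the Azure CLI once filesystem logging is switched on; a sketch (app name and resource group are placeholders):
# enable container logging to the App Service filesystem
az webapp log config --name <app-name> --resource-group <rg> --docker-container-logging filesystem
# stream the container output (gunicorn/Django) live
az webapp log tail --name <app-name> --resource-group <rg>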
docker-compose.yml
version: '3.4'
services:
  myapp:
    image: myapp
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - 8000:8000
      - 443:443
Dockerfile
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.9-buster
EXPOSE 8080
EXPOSE 443
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
RUN pip install mysqlclient
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# collect static files
#RUN python manage.py collectstatic --noinput
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["gunicorn","--timeout", "0", "--bind", "0.0.0.0:8080", "myapp.wsgi" ]
settings.py
# https stuff
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SESSION_SAVE_EVERY_REQUEST = True
SESSION_COOKIE_NAME = 'myappSession'
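Since SECURE_PROXY_SSL_HEADER and the secure cookies rely on the X-Forwarded-Proto header that the App Service front end forwards, it can help to compare what both hostnames actually return for the login page; a rough check (the login path is a placeholder, adjust it to yours):
# compare redirects and cookie flags between the two hostnames
curl -sI https://xxxx.azurewebsites.net/login/ | grep -iE 'location|set-cookie'
curl -sI https://xxxx.mysite.com/login/ | grep -iE 'location|set-cookie'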
**Dockerfile**:
FROM java:8-jre-alpine
EXPOSE 9911
VOLUME /etc/sns
ENV AWS_DEFAULT_REGION=us-east-2 \
AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXX \
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
DB_PATH=/etc/sns/db.json
# aws-cli
RUN apk -Uuv add python py-pip && \
pip install awscli && \
apk --purge -v del py-pip && \
rm /var/cache/apk/*
ARG VERSION=0.3.0
ADD https://github.com/s12v/sns/releases/download/$VERSION/sns-$VERSION.jar /sns.jar
CMD ["java", "-jar", "/sns.jar"]
**docker-compose.yml:**
version: "3"
services:
aws-sns:
build: .
image: aws-sns-test:latest
volumes:
- ./config:/etc/sns
expose:
- 9911
ports:
- 9911:9911
hostname: aws-sns
Later I also set the env variables using aws configure, but this didn't work either.
aws configure
AWS Access Key ID [****************XXXX]:
AWS Secret Access Key [****************XXXX]:
Default region name [us-east-2]:
Default output format [None]:
I also set these variables inside the SNS container later (e.g. docker exec -it 39cb43921b31 sh, where 39cb43921b31 is the container ID), but I didn't get the desired output.
OUTPUT:
aws --endpoint-url=http://localhost:9911 sns create-topic --name local_sns
{
"TopicArn": "arn:aws:sns:us-east-1:123456789012:local_sns"
}
EXPECTED OUTPUT:
aws --endpoint-url=http://localhost:9911 sns create-topic --name local_sns
{
"TopicArn": "arn:aws:sns:us-east-2:123456789012:local_sns"
}
You can't change the region, as it is hard-coded into the source code:
val topic = Topic(s"arn:aws:sns:us-east-1:123456789012:$name", name)
The AWS credentials that you use have no effect; they can be anything as long as the AWS CLI does not complain. You can also use the --no-sign-request option for the AWS CLI to eliminate the need for credentials.
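For example, with signing skipped the call works without any configured credentials (the region in the returned ARN will still be the hard-coded us-east-1):
aws --endpoint-url=http://localhost:9911 --no-sign-request sns create-topic --name local_sns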
I want to control Amplify deployments from GitHub Actions because Amplify auto-build:
doesn't provide a GitHub Environment
doesn't watch the CI for failures and will deploy anyway, or
requires me to duplicate the CI setup and re-run it in Amplify
didn't support running a Cypress job out-of-the-box
Turn off auto-build (in the App settings / General / Branches).
Add the following script and job:
scripts/amplify-deploy.sh
echo "Deploy app $1 branch $2"
JOB_ID=$(aws amplify start-job --app-id $1 --branch-name $2 --job-type RELEASE | jq -r '.jobSummary.jobId')
echo "Release started"
echo "Job ID is $JOB_ID"
while [[ "$(aws amplify get-job --app-id $1 --branch-name $2 --job-id $JOB_ID | jq -r '.job.summary.status')" =~ ^(PENDING|RUNNING)$ ]]; do sleep 1; done
JOB_STATUS="$(aws amplify get-job --app-id $1 --branch-name $2 --job-id $JOB_ID | jq -r '.job.summary.status')"
echo "Job finished"
echo "Job status is $JOB_STATUS"
deploy:
  runs-on: ubuntu-latest
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: us-east-1
    AWS_DEFAULT_OUTPUT: json
  steps:
    - uses: actions/checkout@v2
    - name: Deploy
      run: ./scripts/amplify-deploy.sh xxxxxxxxxxxxx master
You could improve the script to fail if the release fails, add needed steps (e.g. lint, test), add a GitHub Environment, etc.
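For example, appending a check like this to the script makes the workflow step fail when the job does not end in SUCCEED (the status value I've seen the Amplify API return; treat the exact string as an assumption):
# fail the GitHub Actions step if the Amplify release did not succeed
if [[ "$JOB_STATUS" != "SUCCEED" ]]; then
  echo "Amplify job $JOB_ID failed with status $JOB_STATUS"
  exit 1
fi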
There's also amplify-cli-action but it didn't work for me.
Disable automatic builds:
Go to App settings > general in the AWS Amplify console and disable automatic builds there.
Go to App settings > Build Settings and create a webhook; this gives you a curl command that triggers a build.
Example: curl -X POST -d {} URL -H "Content-Type: application/json"
Save the URL in GitHub as a secret.
Add the curl script to the GitHub actions YAML script like this:
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: deploy
      run: |
        URL="${{ secrets.WEBHOOK_URL }}"
        curl -X POST -d {} "$URL" -H "Content-Type: application/json"
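Regarding the "Save the URL in GitHub as a secret" step above: if you use the GitHub CLI, the secret can also be created from a terminal instead of the web UI; a sketch with the secret name used in the workflow:
# store the Amplify webhook URL as a repository secret
gh secret set WEBHOOK_URL --body "<amplify webhook url>"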
Similar to answer 2 here, but I used tags instead.
Create an action like ci.yml, turn off auto-build on the staging & prod envs in Amplify, and create the webhook triggers.
name: CI-Staging
on:
  release:
    types: [prereleased]
permissions: read-all # This is required to read the secrets
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    permissions: read-all # This is required to read the secrets
    steps:
      - name: deploy
        run: |
          URL="${{ secrets.STAGING_DEPLOY_WEBHOOK }}"
          curl -X POST -d {} "$URL" -H "Content-Type: application/json"
name: CI-production
on:
  release:
    types: [released]
permissions: read-all # This is required to read the secrets
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    permissions: read-all # This is required to read the secrets
    steps:
      - name: deploy
        run: |
          URL="${{ secrets.PRODUCTION_DEPLOY_WEBHOOK }}"
          curl -X POST -d {} "$URL" -H "Content-Type: application/json"
When I create an Amazon Ubuntu instance from the AWS web console and try to log in to that instance using ssh from any remote computer, I am able to log in. But when I create the EC2 instance using my Ansible aws.yml file and try to do the same, I am unable to connect and get the error Permission denied (publickey) from every remote host except the host on which I ran the Ansible script. Am I doing something wrong in my Ansible file?
Here is my Ansible YAML file:
auth: {
  auth_url: "",
  # This should be your AWS Access Key ID
  username: "AKIAJY32VWHYOFOR4J7Q",
  # This should be your AWS Secret Access Key
  # can be passed as part of cmd line when running the playbook
  password: "{{ password | default(lookup('env', 'AWS_SECRET_KEY')) }}"
}
# These variable defines AWS cloud provision attributes
cluster: {
  region_name: "us-east-1", #TODO Dynamic fetch
  availability_zone: "", #TODO Dynamic fetch based on region
  security_group: "Fabric",
  target_os: "ubuntu",
  image_name: "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*",
  image_id: "ami-d15a75c7",
  flavor_name: "t2.medium", # "m2.medium" is big enough for Fabric
  ssh_user: "ubuntu",
  validate_certs: True,
  private_net_name: "demonet",
  public_key_file: "/home/ubuntu/.ssh/fd.pub",
  private_key_file: "/home/ubuntu/.ssh/fd",
  ssh_key_name: "fabric",
  # This variable indicate what IP should be used, only valid values are
  # private_ip or public_ip
  node_ip: "public_ip",
  container_network: {
    Network: "172.16.0.0/16",
    SubnetLen: 24,
    SubnetMin: "172.16.0.0",
    SubnetMax: "172.16.255.0",
    Backend: {
      Type: "udp",
      Port: 8285
    }
  },
  service_ip_range: "172.15.0.0/24",
  dns_service_ip: "172.15.0.4",
  # the section defines preallocated IP addresses for each node, if there is no
  # preallocated IPs, leave it blank
  node_ips: [ ],
  # fabric network node names expect to be using a clear pattern, this defines
  # the prefix for the node names.
  name_prefix: "fabric",
  domain: "fabricnet",
  # stack_size determines how many virtual or physical machines we will have
  # each machine will be named ${name_prefix}001 to ${name_prefix}${stack_size}
  stack_size: 3,
  etcdnodes: ["fabric001", "fabric002", "fabric003"],
  builders: ["fabric001"],
  flannel_repo: "https://github.com/coreos/flannel/releases/download/v0.7.1/flannel-v0.7.1-linux-amd64.tar.gz",
  etcd_repo: "https://github.com/coreos/etcd/releases/download/v3.2.0/etcd-v3.2.0-linux-amd64.tar.gz",
  k8s_repo: "https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/",
  go_ver: "1.8.3",
  # If volume want to be used, specify a size in GB, make volume size 0 if wish
  # not to use volume from your cloud
  volume_size: 8,
  # cloud block device name presented on virtual machines.
  block_device_name: "/dev/vdb"
}
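As the comment in the auth block notes, the secret key does not have to be hard-coded; it can come from the environment (the lookup('env', 'AWS_SECRET_KEY') default) or be passed on the command line. A sketch of what that might look like, assuming the playbook file is the aws.yml mentioned above:
# option 1: let the env lookup pick it up
export AWS_SECRET_KEY='<secret access key>'
ansible-playbook aws.yml
# option 2: pass it explicitly as an extra var
ansible-playbook aws.yml -e "password=<secret access key>"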
For login:
For login using ssh I am doing these steps:
1- Download the private key file.
2- chmod 600 the private key.
3- ssh -vvv -i ~/.ssh/sshkeys.pem ubuntu@ec.compute-1.amazonaws.com
I am getting the error Permission denied (publickey).
You should be using the key pair that you created for the AWS instance when connecting to it.
Go to the EC2 dashboard, find the running instance that you need to ssh to, and click Connect.
It would be something like
ssh -i "XXX.pem" ubuntu@ec2-X-XXX-XX-XX.XX-XXX-2.compute.amazonaws.com
Save XXX.pem (the key pair assigned to the instance) to your machine,
not a key generated with ssh-keygen on your own system.
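In this case the playbook launches the instances with the key pair named fabric (ssh_key_name), whose private half is /home/ubuntu/.ssh/fd on the machine that ran Ansible, which is why only that machine can log in. To connect from another remote host, copy that private key (or the matching .pem) there and point ssh at it; a sketch:
# the key path comes from private_key_file in the vars above
chmod 600 ~/.ssh/fd
ssh -i ~/.ssh/fd ubuntu@<instance-public-dns-or-ip>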