Injecting an HLS stream into AWS Elemental MediaPackage from Wowza

I'm following https://github.com/WowzaMediaSystems/wse-example-pushpublish-hls in order to inject an HLS stream from Wowza into an AWS MediaPackage channel.
My PushPublishProfilesCustom.xml
<?xml version="1.0" encoding="UTF-8"?>
<Root>
<PushPublishProfiles>
<PushPublishProfile>
<Name>cupertino-file</Name>
<Protocol>HTTP</Protocol>
<BaseClass>com.mycompany.wms.example.pushpublish.protocol.cupertino.PushPublishHTTPCupertinoFileHandler</BaseClass>
<Implementation>
<Name>Cupertino File</Name>
</Implementation>
<HTTPConfiguration>
</HTTPConfiguration>
<Properties>
</Properties>
</PushPublishProfile>
<PushPublishProfile>
<Name>cupertino-http</Name>
<Protocol>HTTP</Protocol>
<BaseClass>com.mycompany.wms.example.pushpublish.protocol.cupertino.PushPublishHTTPCupertinoHTTPHandler</BaseClass>
<Implementation>
<Name>Cupertino HTTP</Name>
</Implementation>
<HTTPConfiguration>
</HTTPConfiguration>
<Properties>
</Properties>
</PushPublishProfile>
</PushPublishProfiles>
</Root>
My #APP_NAME#/PushPublishMap.txt (I'm adding line breaks to make it easier to read)
MediaPackage={
"entryName":"MediaPackage",
"profile":"cupertino-http",
"streamName":"MediaPackageStream",
"destinationName":"MediaPackage0",
"host":"xxxx.mediapackage.eu-west-1.amazonaws.com/in/v2/xxxx/xxxx/channel",
"port":"443",
"sendSSL":"true",
"username":"xxxx",
"password":"xxxx",
"http.path":"hls"
}
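If it helps to rule out a copy/paste problem, the host, username, and password above can be cross-checked against what MediaPackage reports for the channel's input endpoints. A minimal boto3 sketch (the channel ID is a placeholder, and it assumes AWS credentials with MediaPackage read access are configured):

import boto3

# Read back the HLS ingest endpoint(s) of the MediaPackage channel to compare
# them with the values used in PushPublishMap.txt. "my-channel-id" is a placeholder.
client = boto3.client("mediapackage", region_name="eu-west-1")
channel = client.describe_channel(Id="my-channel-id")
for endpoint in channel["HlsIngest"]["IngestEndpoints"]:
    # The Url already contains the /in/v2/.../channel path Wowza must push to.
    print("url:     ", endpoint["Url"])
    print("username:", endpoint["Username"])
    print("password:", endpoint["Password"])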
When I send data to my Wowza instance (rtsp://X.X.X.X:1935/#APP_NAME#/MediaPackage), I start to see logs like this...
WARN server comment 2020-06-02 09:23:49 - - - - - 4325.922 - - - - - - - - PushPublishHTTPCupertinoHTTPHandler.outputSend([MediaPackage] TV/_definst_/MediaPackage->MediaPackageStream) Found 79 segments to send
WARN server comment 2020-06-02 09:23:49 - - - - - 4325.922 - - - - - - - - PushPublishHTTPCupertinoHTTPHandler.outputSend([MediaPackage] TV/_definst_/MediaPackage->MediaPackageStream) Found 76 segments to delete
ERROR server comment 2020-06-02 09:23:49 - - - - - 4325.934 - - - - - - - - PushPublishHTTPCupertinoHTTPHandler.outputSend([MediaPackage] TV/_definst_/MediaPackage->MediaPackageStream) Send media segment. rendition: AUDIOVIDEO chunkId:77 uri:pdmekxw9/media_77.aac result:FAILURE
So HLS Push Publishing is sending chunks, but without success.
I have read https://www.wowza.com/docs/how-to-configure-apple-hls-packetization-cupertinostreaming, but I don't know which values I should change.
What am I doing wrong? Any ideas?
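One way to narrow down that FAILURE result is to push a single test object to the ingest URL outside of Wowza. MediaPackage's HLS input takes WebDAV-style PUTs over HTTPS with digest authentication, so a rough probe could look like the sketch below (the URL and credentials are the placeholders from the map above; the exact status code for a dummy PUT may vary, but a 401 clearly points at the credentials, while a 2xx suggests the problem is on the Wowza side):

import requests
from requests.auth import HTTPDigestAuth

# Manually PUT a tiny test playlist to the MediaPackage ingest URL using
# HTTPS + digest auth, to separate credential/endpoint problems from Wowza problems.
base = "https://xxxx.mediapackage.eu-west-1.amazonaws.com/in/v2/xxxx/xxxx/channel"
resp = requests.put(base, data=b"#EXTM3U\n",
                    auth=HTTPDigestAuth("xxxx", "xxxx"), timeout=10)
print(resp.status_code, resp.reason)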
EDIT: More logs
2020-06-02 14:32:39 UTC comment server INFO 200 - PushPublishHTTPCupertinoHTTPHandler.createOutputItem([MediaPackage] TV/_definst_/MediaPackage->MediaPackageStream) chunkCount:10, chunkStartIndex:201, lastChunkIndex:209 - - -22856.082 - - - - - - - - - - - - - - - - - - - - - - - - -
2020-06-02 14:32:39 UTC comment server INFO 200 - PushPublishHTTPCupertinoHTTPHandler.createOutputItem([MediaPackage] TV/_definst_/MediaPackage->MediaPackageStream) playlistChunkCount:3, playlistChunkStartIndex:208 - - - 22856.082 - - - - - - - - - - - - - - - - - - - - - - - - -
2020-06-02 14:32:39 UTC comment server INFO 200 - PushPublishHTTPCupertinoHTTPHandler.createOutputItem([MediaPackage] TV/_definst_/MediaPackage->MediaPackageStream) New chunk: chunkRendition:AUDIOVIDEO, chunkId:210, chunkIndex:2 - -- 22856.082 - - - - - - - - - - - - - - - - - - - - - - - - -
2020-06-02 14:32:39 UTC comment server INFO 200 - PushPublishHTTPCupertinoHTTPHandler.createOutputItem([MediaPackage] TV/_definst_/MediaPackage->MediaPackageStream) Marking MediaSegmentModel: pcnod08j/media_207.aac for deletion - -- 22856.083 - - - - - - - - - - - - - - - - - - - - - - - - -
2020-06-02 14:32:39 UTC comment server WARN 200 - PushPublishHTTPCupertinoHTTPHandler.outputSend([MediaPackage] TV/_definst_/MediaPackage->MediaPackageStream) Found 32 segments to send - - - 22856.083 - - -- - - - - - - - - - - - - - - - - - - - - -
2020-06-02 14:32:39 UTC comment server WARN 200 - PushPublishHTTPCupertinoHTTPHandler.outputSend([MediaPackage] TV/_definst_/MediaPackage->MediaPackageStream) Found 29 segments to delete - - - 22856.083 - -- - - - - - - - - - - - - - - - - - - - - - -
2020-06-02 14:32:39 UTC comment server ERROR 500 - PushPublishHTTPCupertinoHTTPHandler.outputSend([MediaPackage] TV/_definst_/MediaPackage->MediaPackageStream) Send media segment. rendition: AUDIOVIDEO chunkId:208 uri:pcnod08j/media_208.aac result:FAILURE - - - 22856.097

Related

KeyMaterial not found in SigningIdentityInfo

I am creating a Hyperledger Fabric network on a 4-node Docker Swarm.
I want to create 5 peers, 1 org, 4 orderers, and 1 CA:
In node 1: CA, CLI, 2 peers (peer0.org1, peer1.org1)
In node 2: 2 orderers, 2 peers (peer2.org1, peer3.org1)
In node 3: 2 orderers, 1 peer (peer4.org1)
In node 4: 1 peer (peer5.org1)
When I run docker stack deploy -c docker-compose.yaml hlf, on the manager node 1 peer, 2 orderers, 1 CA, and CouchDB are up and running, but on the other nodes they are not; only the CouchDB containers are running there.
When I run docker service logs serviceId, I see the error below:
Cannot run peer because error when setting up MSP of type bccsp from directory /etc/hyperledger/crypto/peer/msp: KeyMaterial not found in SigningIdentityInfo
I created the ca, msp, peers, tlsca, and users directories inside org1.example.com on node 1 and copied them to the other nodes under the same path.
I can't figure out what is wrong. The other peers are identical; only the port and peer number change.
peer4org1examplecom:
deploy:
placement:
constraints:
- node.labels.name == worker3
container_name: peer1.org1.example.com
image: hyperledger/fabric-peer:2.1
# extends:
# file: base.yaml
# service: peer-base
environment:
- FABRIC_LOGGING_SPEC=info
- ORDERER_GENERAL_LOGLEVEL=info
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_test
- CORE_PEER_ID=peer4.org1.example.com
- CORE_PEER_ADDRESS=peer4.org1.example.com:11051
- CORE_PEER_LISTENADDRESS=0.0.0.0:11051
- CORE_PEER_CHAINCODEADDRESS=peer4.org1.example.com:11052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:11052
# Exposed for discovery Service
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer4.org1.example.com:8051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb4:5984
- CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
- CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
- CORE_METRICS_PROVIDER=prometheus
# - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9440
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto/peer/msp
#extends base.yaml
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_default
- CORE_LOGGING_LEVEL=INFO
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
ports:
- 11051:9051
volumes:
- /home/ubuntu/hlf-docker-swarm/test-network/artifacts/crypto-config/peerOrganizations/org1.example.com/peers/peer4.org1.example.com/msp:/etc/hyperledger/crypto/peer/msp
- /home/ubuntu/hlf-docker-swarm/test-network/artifacts/crypto-config/peerOrganizations/org1.example.com/peers/peer4.org1.example.com/tls:/etc/hyperledger/crypto/peer/tls
- /var/run/:/host/var/run/
- ./:/etc/hyperledger/
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
networks:
test:
aliases:
- peer4.org1.example.com
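The "KeyMaterial not found in SigningIdentityInfo" error generally means the peer cannot find a usable private key under its MSP directory (typically an empty or missing keystore/signcerts). Because the compose file bind-mounts the MSP from a host path, that path has to exist, fully populated, on every swarm node the peer can be scheduled on; incomplete crypto material on a worker node is a common cause of this symptom. A quick check to run on each node (a sketch; the path is the host path from the volumes section above):

import os

# Verify that the MSP bind-mount source on this swarm node actually contains
# key material. Run it on every node that can schedule the peer.
msp = ("/home/ubuntu/hlf-docker-swarm/test-network/artifacts/crypto-config/"
       "peerOrganizations/org1.example.com/peers/peer4.org1.example.com/msp")
for sub in ("keystore", "signcerts", "cacerts"):
    path = os.path.join(msp, sub)
    files = os.listdir(path) if os.path.isdir(path) else []
    print(f"{sub:9s} -> {len(files)} file(s): {files}")
# An empty keystore or signcerts here matches the
# "KeyMaterial not found in SigningIdentityInfo" error.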

Traefik, Django, Angular, Docker - Mixed Content

I am trying to set up Traefik to serve my Django API over HTTPS, but without exposing it to the outside network/world.
My docker-compose:
---
version: "3.6"
services:
backend_prod:
image: $BACKEND_IMAGE
restart: always
environment:
- DJANGO_SECRET_KEY=$DJANGO_SECRET_KEY
- DATABASE_ENGINE=$DATABASE_ENGINE
- DATABASE_NAME=$DATABASE_NAME
- DATABASE_USER=$DATABASE_USER
- DATABASE_PASSWORD=$DATABASE_PASSWORD
- DATABASE_HOST=$DATABASE_HOST
- DATABASE_PORT=$DATABASE_PORT
- PRODUCTION=TRUE
security_opt:
- no-new-privileges:true
container_name: backend_prod
networks:
- traefik_default
calendar_frontend_prod:
image: $FRONTEND_IMAGE
restart: always
security_opt:
- no-new-privileges:true
container_name: frontend_prod
environment:
- PRODUCTION=TRUE
networks:
- traefik_default
labels:
- "traefik.enable=true"
- "traefik.http.routers.frontend.entrypoints=webs"
- "traefik.http.routers.frontend.rule=Host(`mywebsite.org`)"
- "traefik.http.routers.frontend.tls.certresolver=letsencrypt"
- "traefik.http.services.frontend.loadbalancer.server.port=4200"
- "traefik.http.services.frontend.loadbalancer.server.scheme=http"
networks:
traefik_default:
external: true
Inside my frontend files, I have it set up like this:
export const environment = {
production: true,
apiUrl: 'http://backend_prod'
};
After that, when I go to mywebsite.org and look at the network tab, I see:
polyfills.js:1 Mixed Content: The page at 'https://mywebsite.org/auth/login' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://backend_prod/api/users/login'. This request has been blocked; the content must be served over HTTPS.
I tried adding the lines below to the backend_prod service:
- "traefik.enable=true"
- "traefik.http.routers.backend_prod.entrypoints=webs"
- "traefik.http.routers.backend_prod.rule=Host(`be.localhost`)"
- "traefik.http.services.backend_prod.loadbalancer.server.port=80"
- "traefik.http.services.backend_prod.loadbalancer.server.scheme=http"
but then I got an error from the frontend: https://be.localhost Connection Refused.
How could I solve this problem?
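One thing worth noting: http://backend_prod is a Docker-internal hostname. It only resolves on the traefik_default network, while the Angular code runs in the visitor's browser, outside Docker, so the request can never reach the backend and is additionally blocked as mixed content on an HTTPS page. A small sketch to see the difference (run it once inside a container on that network and once on the host; the hostnames are the ones from the compose file and environment settings above):

import socket

# Check whether the backend hostname from environment.apiUrl resolves.
# Inside a container attached to traefik_default it should resolve;
# from the host (or the browser's point of view) it will not.
for host in ("backend_prod", "mywebsite.org"):
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror as exc:
        print(host, "-> not resolvable:", exc)

So the API most likely needs to be published through Traefik on a public HTTPS hostname (or as a path under mywebsite.org), with apiUrl pointing at that URL instead of the container name.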

Flask Lightsail logs receiving requests every 5 seconds

I've deployed a Flask application to Lightsail via a tutorial provided on the AWS website.
Everything is working as expected in terms of my frontend communicating with my backend, but as I try to debug and access the container logs via the Lightsail console, I notice that I'm currently receiving requests every 5 seconds. The logs look as follows:
[4/May/2022:06:48:30] 172.26.7.217 - - [04/May/2022 06:48:30] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:30] 172.26.17.192 - - [04/May/2022 06:48:30] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:35] 172.26.47.225 - - [04/May/2022 06:48:35] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:35] 172.26.57.133 - - [04/May/2022 06:48:35] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:35] 172.26.7.217 - - [04/May/2022 06:48:35] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:35] 172.26.17.192 - - [04/May/2022 06:48:35] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:40] 172.26.47.225 - - [04/May/2022 06:48:40] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:40] 172.26.57.133 - - [04/May/2022 06:48:40] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:40] 172.26.7.217 - - [04/May/2022 06:48:40] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:40] 172.26.17.192 - - [04/May/2022 06:48:40] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:45] 172.26.47.225 - - [04/May/2022 06:48:45] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:45] 172.26.57.133 - - [04/May/2022 06:48:45] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:45] 172.26.7.217 - - [04/May/2022 06:48:45] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:45] 172.26.17.192 - - [04/May/2022 06:48:45] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:50] 172.26.47.225 - - [04/May/2022 06:48:50] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:50] 172.26.57.133 - - [04/May/2022 06:48:50] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:50] 172.26.7.217 - - [04/May/2022 06:48:50] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:50] 172.26.17.192 - - [04/May/2022 06:48:50] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:55] 172.26.47.225 - - [04/May/2022 06:48:55] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:55] 172.26.57.133 - - [04/May/2022 06:48:55] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:55] 172.26.7.217 - - [04/May/2022 06:48:55] "GET / HTTP/1.1" 404 -
[4/May/2022:06:48:55] 172.26.17.192 - - [04/May/2022 06:48:55] "GET / HTTP/1.1" 404 -
[4/May/2022:06:49:00] 172.26.47.225 - - [04/May/2022 06:49:00] "GET / HTTP/1.1" 404 -
[4/May/2022:06:49:00] 172.26.57.133 - - [04/May/2022 06:49:00] "GET / HTTP/1.1" 404 -
[4/May/2022:06:49:00] 172.26.7.217 - - [04/May/2022 06:49:00] "GET / HTTP/1.1" 404 -
[4/May/2022:06:49:00] 172.26.17.192 - - [04/May/2022 06:49:00] "GET / HTTP/1.1" 404 -
[4/May/2022:06:49:05] 172.26.47.225 - - [04/May/2022 06:49:05] "GET / HTTP/1.1" 404 -
[4/May/2022:06:49:05] 172.26.57.133 - - [04/May/2022 06:49:05] "GET / HTTP/1.1" 404 -
[4/May/2022:06:49:05] 172.26.7.217 - - [04/May/2022 06:49:05] "GET / HTTP/1.1" 404 -
[4/May/2022:06:49:05] 172.26.17.192 - - [04/May/2022 06:49:05] "GET / HTTP/1.1" 404 -
There are a few things here that confuse me:
I don't have the / route specifically defined in my Flask application. Is this necessary? It's clear the 404s are happening because the route is not defined, but no code in my React frontend explicitly makes a request to this route. I'm not sure whether I'm supposed to just create a / route on my Flask application that more or less does nothing.
I see that the requests come in every 5 seconds. Could this be some sort of health check? I'm certainly not visiting my frontend every 5 seconds. I do have Nginx set up on the Lightsail instance that runs my frontend, and I'm not sure whether that has something to do with it.
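For what it's worth, the 5-second interval and the 172.26.x.x source addresses look like the Lightsail container service's own health checks (its public endpoint probes a configurable path, / by default) rather than real visitors. If you want those probes to get a 200 instead of a 404, a minimal root route is enough; a sketch, assuming the Flask application object is named app:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def health_check():
    # Answer the Lightsail health-check probe (and anything else hitting /).
    return "OK", 200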
Any help is appreciated, thank you!

OIDC Redirect URI Error in Dockerized Django

I'm running two applications using docker-compose. Each application has a bunch of containers. The intention is for App A (django app) to host the OIDC provider, while App B (some other app) will authenticate users by calling the App A API.
I'm using the django-oidc-provider library (https://django-oidc-provider.readthedocs.io/en/latest/index.html)
I've already configured the OIDC integration on both sides. However, every time App B redirects to App A, I hit the following error:
Redirect URI Error
The request fails due to a missing, invalid, or mismatching redirection URI (redirect_uri).
This happens even though the redirect_uri matches exactly on both sides.
Here's my docker-compose.yml:
version: '3'
networks:
default:
external:
name: datahub-gms_default
services:
django:
build:
context: .
dockerfile: ./compose/local/django/Dockerfile
image: dqt
container_name: dqt
hostname: dqt
platform: linux/x86_64
depends_on:
- postgres
volumes:
- .:/app:z
environment:
- DJANGO_READ_DOT_ENV_FILE=true
env_file:
- ./.envs/.local/.django
- ./.envs/.local/.postgres
ports:
- "8000:8000"
command: /start
postgres:
build:
context: .
dockerfile: ./compose/local/postgres/Dockerfile
image: postgres
container_name: postgres
hostname: postgres
volumes:
- dqt_local_postgres_data:/var/lib/postgresql/data:Z
- dqt_local_postgres_data_backups:/backups:z
env_file:
- ./.envs/.local/.postgres
broker:
container_name: broker
depends_on:
- zookeeper
environment:
- KAFKA_BROKER_ID=1
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
- KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
- KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0
- KAFKA_HEAP_OPTS=-Xms256m -Xmx256m
hostname: broker
image: confluentinc/cp-kafka:5.4.0
ports:
- 29092:29092
- 9092:9092
datahub-actions:
depends_on:
- datahub-gms
environment:
- GMS_HOST=datahub-gms
- GMS_PORT=8080
- KAFKA_BOOTSTRAP_SERVER=broker:29092
- SCHEMA_REGISTRY_URL=http://schema-registry:8081
- METADATA_AUDIT_EVENT_NAME=MetadataAuditEvent_v4
- METADATA_CHANGE_LOG_VERSIONED_TOPIC_NAME=MetadataChangeLog_Versioned_v1
- DATAHUB_SYSTEM_CLIENT_ID=__datahub_system
- DATAHUB_SYSTEM_CLIENT_SECRET=JohnSnowKnowsNothing
- KAFKA_PROPERTIES_SECURITY_PROTOCOL=PLAINTEXT
hostname: actions
image: public.ecr.aws/datahub/acryl-datahub-actions:${ACTIONS_VERSION:-head}
datahub-frontend-react:
container_name: datahub-frontend-react
depends_on:
- datahub-gms
environment:
- DATAHUB_GMS_HOST=datahub-gms
- DATAHUB_GMS_PORT=8080
- DATAHUB_SECRET=YouKnowNothing
- DATAHUB_APP_VERSION=1.0
- DATAHUB_PLAY_MEM_BUFFER_SIZE=10MB
- JAVA_OPTS=-Xms512m -Xmx512m -Dhttp.port=9002 -Dconfig.file=datahub-frontend/conf/application.conf
-Djava.security.auth.login.config=datahub-frontend/conf/jaas.conf -Dlogback.configurationFile=datahub-frontend/conf/logback.xml
-Dlogback.debug=false -Dpidfile.path=/dev/null
- KAFKA_BOOTSTRAP_SERVER=broker:29092
- DATAHUB_TRACKING_TOPIC=DataHubUsageEvent_v1
- ELASTIC_CLIENT_HOST=elasticsearch
- ELASTIC_CLIENT_PORT=9200
- AUTH_OIDC_ENABLED=true
- AUTH_OIDC_CLIENT_ID=778948
- AUTH_OIDC_CLIENT_SECRET=some-client-secret
- AUTH_OIDC_DISCOVERY_URI=http://dqt:8000/openid/.well-known/openid-configuration/
- AUTH_OIDC_BASE_URL=http://datahub:9002/
hostname: datahub
image: linkedin/datahub-frontend-react:${DATAHUB_VERSION:-head}
ports:
- 9002:9002
datahub-gms:
container_name: datahub-gms
depends_on:
- mysql
environment:
- DATASET_ENABLE_SCSI=false
- EBEAN_DATASOURCE_USERNAME=datahub
- EBEAN_DATASOURCE_PASSWORD=datahub
- EBEAN_DATASOURCE_HOST=mysql:3306
- EBEAN_DATASOURCE_URL=jdbc:mysql://mysql:3306/datahub?verifyServerCertificate=false&useSSL=true&useUnicode=yes&characterEncoding=UTF-8
- EBEAN_DATASOURCE_DRIVER=com.mysql.jdbc.Driver
- KAFKA_BOOTSTRAP_SERVER=broker:29092
- KAFKA_SCHEMAREGISTRY_URL=http://schema-registry:8081
- ELASTICSEARCH_HOST=elasticsearch
- ELASTICSEARCH_PORT=9200
- GRAPH_SERVICE_IMPL=elasticsearch
- JAVA_OPTS=-Xms1g -Xmx1g
- ENTITY_REGISTRY_CONFIG_PATH=/datahub/datahub-gms/resources/entity-registry.yml
- MAE_CONSUMER_ENABLED=true
- MCE_CONSUMER_ENABLED=true
hostname: datahub-gms
image: linkedin/datahub-gms:${DATAHUB_VERSION:-head}
ports:
- 8080:8080
volumes:
- ${HOME}/.datahub/plugins:/etc/datahub/plugins
elasticsearch:
container_name: elasticsearch
environment:
- discovery.type=single-node
- xpack.security.enabled=false
- ES_JAVA_OPTS=-Xms256m -Xmx256m -Dlog4j2.formatMsgNoLookups=true
healthcheck:
retries: 4
start_period: 2m
test:
- CMD-SHELL
- curl -sS --fail 'http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=0s' || exit 1
hostname: elasticsearch
image: elasticsearch:7.9.3
mem_limit: 1g
ports:
- 9200:9200
volumes:
- esdata:/usr/share/elasticsearch/data
elasticsearch-setup:
container_name: elasticsearch-setup
depends_on:
- elasticsearch
environment:
- ELASTICSEARCH_HOST=elasticsearch
- ELASTICSEARCH_PORT=9200
- ELASTICSEARCH_PROTOCOL=http
hostname: elasticsearch-setup
image: linkedin/datahub-elasticsearch-setup:${DATAHUB_VERSION:-head}
kafka-setup:
container_name: kafka-setup
depends_on:
- broker
- schema-registry
environment:
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_BOOTSTRAP_SERVER=broker:29092
hostname: kafka-setup
image: linkedin/datahub-kafka-setup:${DATAHUB_VERSION:-head}
mysql:
command: --character-set-server=utf8mb4 --collation-server=utf8mb4_bin
container_name: mysql
environment:
- MYSQL_DATABASE=datahub
- MYSQL_USER=datahub
- MYSQL_PASSWORD=datahub
- MYSQL_ROOT_PASSWORD=datahub
hostname: mysql
image: mysql:5.7
ports:
- 3306:3306
volumes:
- ../mysql/init.sql:/docker-entrypoint-initdb.d/init.sql
- mysqldata:/var/lib/mysql
mysql-setup:
container_name: mysql-setup
depends_on:
- mysql
environment:
- MYSQL_HOST=mysql
- MYSQL_PORT=3306
- MYSQL_USERNAME=datahub
- MYSQL_PASSWORD=datahub
- DATAHUB_DB_NAME=datahub
hostname: mysql-setup
image: acryldata/datahub-mysql-setup:head
schema-registry:
container_name: schema-registry
depends_on:
- zookeeper
- broker
environment:
- SCHEMA_REGISTRY_HOST_NAME=schemaregistry
- SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:2181
hostname: schema-registry
image: confluentinc/cp-schema-registry:5.4.0
ports:
- 8081:8081
zookeeper:
container_name: zookeeper
environment:
- ZOOKEEPER_CLIENT_PORT=2181
- ZOOKEEPER_TICK_TIME=2000
hostname: zookeeper
image: confluentinc/cp-zookeeper:5.4.0
ports:
- 2181:2181
volumes:
- zkdata:/var/opt/zookeeper
volumes:
dqt_local_postgres_data: {}
dqt_local_postgres_data_backups: {}
esdata: null
mysqldata: null
zkdata: null
In the above, the datahub-frontend-react container is supposed to integrate with the dqt container for the OIDC authentication.
The Docker log doesn't show any exceptions, and the HTTP status code is 200:
dqt | [28/Feb/2022 10:43:43] "GET /openid/.well-known/openid-configuration/ HTTP/1.1" 200 682
dqt | [28/Feb/2022 10:43:44] "GET /openid/authorize?response_type=code&redirect_uri=http%3A%2F%2Fdatahub%3A9002%2F%2Fcallback%2Foidc&state=9Fj1Bog-ZN8fhN2kufWng2fRGaqCYnkMz6n3yKxPowo&client_id=778948&scope=openid+profile+email HTTP/1.1" 200 126
Here's the redirect_uri configuration in Django admin:
I suspect it could be related to the fact that they are different containers with different hostnames (I don't know what to do about that).
What could be the root cause of this issue?
Your log shows that the app is redirecting with this login URL, containing two %2F characters in a row, so the URL the app uses is different from the one configured:
http://datahub:9002//callback/oidc
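A quick way to confirm this is to URL-decode the redirect_uri parameter straight from the logged authorize request (a small sketch below); the doubled slash most likely comes from AUTH_OIDC_BASE_URL ending in a slash while /callback/oidc is appended to it:

from urllib.parse import unquote

# Decode the redirect_uri exactly as it appears in the dqt access log.
logged = "http%3A%2F%2Fdatahub%3A9002%2F%2Fcallback%2Foidc"
print(unquote(logged))  # -> http://datahub:9002//callback/oidc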
INTERNAL AND EXTERNAL URLs
I'm not sure it will work even once you resolve that, though, since the callback URL looks like a Docker Compose internal URL that the browser will be unable to reach. Aim to use a URL such as this instead:
http://localhost:9002/callback/oidc
One option that can be useful for making URLs more understandable during development, and for planning the real deployment, is to add custom host names to your computer's hosts file. You can then log in via URLs such as http://www.myapp.com, which I find clearer.
See these resources for something to compare against; they describe a setup with both internal and external URLs.
Custom Hosts
Docker Compose Example

CloudWatch Log group missing although CloudWatch agent is working

I can't see the log group defined by the CloudWatch agent on my EC2 instance.
Also, the default log group /var/log/messages is not visible.
I can't see these logs on the root account either.
I have other log groups configured and visible.
I have the following setup:
Amazon Linux
AMI managed role attached to instance: CloudWatchAgentServerPolicy
Agent installed via awslogs - https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
Agent started successfully
No errors in /var/log/awslogs.log; it looks like it is working normally. Log below.
Configuration done via /etc/awslogs/config/FlaskAppAccessLogs.conf
Instance has outbound access to internet
Instance security groups allows all outbound traffic
Any ideas what to check or what can be missing?
/etc/awslogs/config/FlaskAppAccessLogs.conf:
cat /etc/awslogs/config/FlaskAppAccessLogs.conf
[/var/log/nginx/access.log]
initial_position = start_of_file
file = /var/log/nginx/access.log
datetime_format = %d/%b/%Y:%H:%M:%S %z
buffer_duration = 5000
log_group_name = FlaskApp-Frontends-access-log
log_stream_name = {instance_id}
/var/log/awslogs.log
2019-01-05 17:50:21,520 - cwlogs.push - INFO - 24838 - MainThread - Loading additional configs from /etc/awslogs/config/FlaskAppAccessLogs.conf
2019-01-05 17:50:21,520 - cwlogs.push - INFO - 24838 - MainThread - Missing or invalid value for use_gzip_http_content_encoding config. Defaulting to use gzip encoding.
2019-01-05 17:50:21,520 - cwlogs.push - INFO - 24838 - MainThread - Missing or invalid value for queue_size config. Defaulting to use 10
2019-01-05 17:50:21,520 - cwlogs.push - INFO - 24838 - MainThread - Using default logging configuration.
2019-01-05 17:50:21,544 - cwlogs.push.stream - INFO - 24838 - Thread-1 - Starting publisher for [c17fae93047ac481a4c95b578dd52f94, /var/log/messages]
2019-01-05 17:50:21,550 - cwlogs.push.stream - INFO - 24838 - Thread-1 - Starting reader for [c17fae93047ac481a4c95b578dd52f94, /var/log/messages]
2019-01-05 17:50:21,551 - cwlogs.push.reader - INFO - 24838 - Thread-4 - Start reading file from 0.
2019-01-05 17:50:21,563 - cwlogs.push.stream - INFO - 24838 - Thread-1 - Starting publisher for [8ff79b6440ef7223cc4a59f18e5f3aef, /var/log/nginx/access.log]
2019-01-05 17:50:21,587 - cwlogs.push.stream - INFO - 24838 - Thread-1 - Starting reader for [8ff79b6440ef7223cc4a59f18e5f3aef, /var/log/nginx/access.log]
2019-01-05 17:50:21,588 - cwlogs.push.reader - INFO - 24838 - Thread-6 - Start reading file from 0.
2019-01-05 17:50:27,838 - cwlogs.push.publisher - WARNING - 24838 - Thread-5 - Caught exception: An error occurred (ResourceNotFoundException) when calling the PutLogEvents operation: The specified log group does not exist.
2019-01-05 17:50:27,839 - cwlogs.push.batch - INFO - 24838 - Thread-5 - Creating log group FlaskApp-Frontends-access-log.
2019-01-05 17:50:27,851 - cwlogs.push.publisher - WARNING - 24838 - Thread-3 - Caught exception: An error occurred (ResourceNotFoundException) when calling the PutLogEvents operation: The specified log group does not exist.
2019-01-05 17:50:27,851 - cwlogs.push.batch - INFO - 24838 - Thread-3 - Creating log group /var/log/messages.
2019-01-05 17:50:27,966 - cwlogs.push.batch - INFO - 24838 - Thread-5 - Creating log stream i-0d7e533f67870ff8d.
2019-01-05 17:50:27,980 - cwlogs.push.batch - INFO - 24838 - Thread-3 - Creating log stream i-0d7e533f67870ff8d.
2019-01-05 17:50:28,077 - cwlogs.push.publisher - INFO - 24838 - Thread-5 - Log group: FlaskApp-Frontends-access-log, log stream: i-0d7e533f67870ff8d, queue size: 0, Publish batch: {'skipped_events_count': 0, 'first_event': {'timestamp': 1546688052000, 'start_position': 0L, 'end_position': 161L}, 'fallback_events_count': 0, 'last_event': {'timestamp': 1546708885000, 'start_position': 4276L, 'end_position': 4468L}, 'source_id': '8ff79b6440ef7223cc4a59f18e5f3aef', 'num_of_events': 24, 'batch_size_in_bytes': 5068}
Status of awslogs
sudo service awslogs status
awslogs (pid 25229) is running...
IAM role policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"cloudwatch:PutMetricData",
"ec2:DescribeTags",
"logs:PutLogEvents",
"logs:DescribeLogStreams",
"logs:DescribeLogGroups",
"logs:CreateLogStream",
"logs:CreateLogGroup"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ssm:GetParameter"
],
"Resource": "arn:aws:ssm:*:*:parameter/AmazonCloudWatch-*"
}
]
}
It seems that posting a question can quickly help you find the answer.
There is an additional configuration file in which I had made a typo:
sudo cat /etc/awslogs/awscli.conf
[plugins]
cwlogs = cwlogs
[default]
region = us-west-1
As configured above, the logs are delivered to the us-west-1 region.
I was checking us-west-2 :)
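For anyone double-checking the same thing: the agent creates the log groups in whichever region awscli.conf points at, so listing the group per region shows where it ended up. A minimal boto3 sketch (assumes credentials allowed to call logs:DescribeLogGroups):

import boto3

# Look for the log group in both candidate regions to confirm where the
# awslogs agent actually created it.
for region in ("us-west-1", "us-west-2"):
    logs = boto3.client("logs", region_name=region)
    response = logs.describe_log_groups(logGroupNamePrefix="FlaskApp-Frontends-access-log")
    names = [group["logGroupName"] for group in response.get("logGroups", [])]
    print(region, "->", names or "not found")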