I'm trying to send a push notification to my app using Google Cloud Scheduler:
gcloud beta scheduler jobs create http PUSH --schedule="0 * * * *" --uri="https://fcm.googleapis.com/fcm/send" --description="desc" --headers="Authorization: key=<AUTHKEY> --http-method="POST" --message-body="{\"to\":\"/topics/allDevices\",\"priority\":\"low\",\"data\":{\"success\":\"ok\"}}"
The result is always 401 Unauthorized. After issuing the command:
gcloud beta scheduler jobs describe PUSH
the Authorization header I set is not included in the output:
description: desc
httpTarget:
  body: eyJ0byI6Ii90b3BpY3MvYWxsRGV2aWNlcyIsInByaW9yaXR5IjoibG93IiwiZGF0YSI6eyJzdWNjZXNzIjoib2sifX0= <--- THIS IS WEIRD
  headers:
    Content-Type: application/octet-stream
    User-Agent: Google-Cloud-Scheduler
  httpMethod: POST
  uri: https://fcm.googleapis.com/fcm/send
lastAttemptTime: '2018-11-07T20:32:37.657408Z'
name: projects/..../locations/europe-west1/jobs/PUSH
retryConfig:
  maxBackoffDuration: 3600s
  maxDoublings: 16
  maxRetryDuration: 0s
  minBackoffDuration: 5s
schedule: 0 * * * *
scheduleTime: '2018-11-07T21:00:00.681498Z'
state: ENABLED
status:
  code: 16
timeZone: Etc/UTC
userUpdateTime: '2018-11-07T20:29:15Z'
First, the question about the body:
body: eyJ0byI6Ii90b3BpY3MvYWxsRGV2aWNlcyIsInByaW9yaXR5IjoibG93IiwiZGF0YSI6eyJzdWNjZXNzIjoib2sifX0= <--- THIS IS WEIRD
This is the base64 encoding of
{\"to\":\"/topics/allDevices\",\"priority\":\"low\",\"data\":{\"success\":\"ok\"}}
Google is taking your --message-body and encoding it in base64.
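You can check this yourself; a quick sketch, assuming the GNU coreutils base64 tool (on macOS the decode flag is -D instead of --decode):
# Encode the message body; apart from line wrapping, the output should match the body field above.
echo -n '{"to":"/topics/allDevices","priority":"low","data":{"success":"ok"}}' | base64
# Or decode the body field from the describe output to recover the JSON.
echo 'eyJ0byI6Ii90b3BpY3MvYWxsRGV2aWNlcyIsInByaW9yaXR5IjoibG93IiwiZGF0YSI6eyJzdWNjZXNzIjoib2sifX0=' | base64 --decode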
Next, regarding the header issue: you have several errors in your --headers flag.
--headers="Authorization: key=AUTHKEY
You are missing a closing quote mark after AUTHKEY. I will assume this is just an editing mistake made while writing the question.
However, the syntax for --headers is also wrong. The flag expects KEY=VALUE, not KEY: VALUE. In this example the KEY is Authorization and the VALUE is key=AUTHKEY.
--headers="Authorization=key=AUTHKEY"
I am trying to extract some specific data from the PostgreSQL logs using grok parsing rules in Datadog. I want to extract the following, in JSON format, from the logs below:
{
    dbuser {
        AROAXXXXXXXXXXXXXXXXX : username
    }
}
These are the logs from which I am trying to extract the above information:
2022-11-11 09:09:15 UTC:10.116.0.244(57888):AROAXXXXXXXXXXXXXXXXX:username#database_name:[592]:LOG: AUDIT: SESSION,3016,1,READ,SELECT,,,"/*pga4dash*/
2022-11-11 09:20:53 UTC:10.116.0.244(57946):AROAXXXXXXXXXXXXXXXXX:username#database_name:[7696]:LOG: pam_authenticate failed: Permission denied
2022-11-11 09:27:02 UTC:10.116.0.244(57984):AROAXXXXXXXXXXXXXXXXX:username#database_name:[8328]:LOG: AUDIT: SESSION,1,1,ROLE,ALTER ROLE,,,ALTER USER app_user SET pgaudit.log TO 'NONE';,<not logged>
2022-11-11 09:21:57 UTC:10.117.0.98(44764):AROAXXXXXXXXXXXXXXXXX:username#database_name:[2873]:FATAL: pg_hba.conf rejects connection for host "10.117.0.98", user "AROAXXXXXXXXXXXXXXXXX:username", database "database_name", SSL off
What I have tried and achieved so far
The following generalised rule should work on all of the logs, but it gives me no output:
%{date("yyyy-MM-dd HH:mm:ss z"):}\:%{ipv4:}\(%{number:}\)\:%{data:dbuser:keyvalue(":")}
If I use the following, I get the desired output, but it only works for the first log pattern mentioned above:
%{date("yyyy-MM-dd HH:mm:ss z"):}\:%{ipv4:}\(%{number:}\)\:%{data:dbuser:keyvalue(":")}:\[592\]\:LOG\:\s+AUDIT\:\s+SESSION,%{integer:},1,READ,SELECT,,,\"%{notSpace:}%{data}
If there is a way to ignore the rest of the line and extract only the exact match, please help me out.
I was able to figure out the solution to the above question. The following is the parse rule I used, which achieves what I wanted:
%{date("yyyy-MM-dd HH:mm:ss z"):}\:%{ipv4:}\(%{number:}\)\:%{word}:%{data:database.username}:%{data}
I'm trying to execute an AWS Step Function from API Gateway, and it's working as expected.
Whenever I pass the input and stateMachineArn (the step function to execute), it triggers the step function.
But it still returns status code 200 when it cannot find the step function. I want to return status code 404 if API Gateway does not find that step function.
Could you please help me with that?
Response:
Status: 200 OK
Expected:
Status: 404
As per the documentation, the StartExecution API call returns 400 Bad Request for a non-existent state machine, which is correct by RESTful API standards.
StateMachineDoesNotExist
The specified state machine does not exist.
HTTP Status Code: 400
From the RESTful API point of view, the endpoint /execution/ (which I created in API Gateway for the integration setup) is a resource, no matter whether it accepts GET, POST, or something else. A 404 is only appropriate when the resource /execution/ itself does not exist. If the /execution/ endpoint exists but its invocation failed (no matter what the reason), the response status code must be something other than 404.
So the 200 response returned for a POST call with a non-existent state machine is correct. When API Gateway made the call to the non-existent state machine, it got a 400 from the StartExecution API call, which it wrapped into a proper error message while still returning a 200 HTTP response.
curl -s -X POST -d '{"input": "{}","name": "MyExecution17","stateMachineArn": "arn:aws:states:eu-central-1:1234567890:stateMachine:mystatemachine"}' https://123456asdasdas.execute-api.eu-central-1.amazonaws.com/v1/execution|jq .
{
  "__type": "com.amazonaws.swf.service.v2.model#StateMachineDoesNotExist",
  "message": "State Machine Does Not Exist: 'arn:aws:states:eu-central-1:1234567890:stateMachine:mystatemachine1'"
}
You can create another Method Response with the exact HTTP status code you want to return (404 in your case), and an Integration Response that maps to that Method Response by providing either the exact HTTP response code (400, the upstream response from the StartExecution API call) or a regex (4\d{2}) matching all 4xx errors.
In that case you will return 404 for all responses where the upstream StartExecution call returned one of the 4xx errors below (a rough CLI sketch follows the list):
ExecutionAlreadyExists -> 400
ExecutionLimitExceeded -> 400
InvalidArn -> 400
InvalidExecutionInput -> 400
InvalidName -> 400
StateMachineDeleting -> 400
StateMachineDoesNotExist -> 400
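For illustration, a sketch of that setup with the AWS CLI; the API id (abc123), resource id (def456), and stage name (v1) are placeholders, and it assumes the POST method integration already exists:
# 1. Declare a 404 method response on the POST method.
aws apigateway put-method-response --rest-api-id abc123 --resource-id def456 --http-method POST --status-code 404
# 2. Map any 4xx upstream response from StartExecution to that 404.
aws apigateway put-integration-response --rest-api-id abc123 --resource-id def456 --http-method POST --status-code 404 --selection-pattern '4\d{2}'
# 3. Redeploy the stage so the change takes effect.
aws apigateway create-deployment --rest-api-id abc123 --stage-name v1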
Non Existent State Machine:
curl -s -X POST -d '{"input": "{}","name": "MyExecution17","stateMachineArn": "arn:aws:states:eu-central-1:1234567890:stateMachine:mystatemachine1"}' https://123456asdasdas.execute-api.eu-central-1.amazonaws.com/v1/execution|jq .
< HTTP/2 404
< date: Sat, 30 Jan 2021 14:12:16 GMT
< content-type: application/json
...
{
  "__type": "com.amazonaws.swf.service.v2.model#StateMachineDoesNotExist",
  "message": "State Machine Does Not Exist: 'arn:aws:states:eu-central-1:1234567890:stateMachine:mystatemachine1'"
}
Execution Already Exists
curl -s -X POST -d '{"input": "{}","name": "MyExecution17","stateMachineArn": "arn:aws:states:eu-central-1:1234567890:stateMachine:mystatemachine"}' https://123456asdasdas.execute-api.eu-central-1.amazonaws.com/v1/execution|jq .
* We are completely uploaded and fine
< HTTP/2 404
< date: Sat, 30 Jan 2021 14:28:27 GMT
< content-type: application/json
{
  "__type": "com.amazonaws.swf.service.v2.model#ExecutionAlreadyExists",
  "message": "Execution Already Exists: 'arn:aws:states:eu-central-1:1234567890:execution:mystatemachine:MyExecution17'"
}
That result, I think, will be misleading.
I have a GitLab CI YAML file with 2 jobs. My .gitlab-ci.yml file is:
variables:
  MSBUILD_PATH: 'C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe'
  SOLUTION_PATH: 'Source/NewProject.sln'

stages:
  - build
  - trigger_IT_service

build_job:
  stage: build
  script:
    - '& "$env:MSBUILD_PATH" "$env:SOLUTION_PATH" /nologo /t:Rebuild /p:Configuration=Debug'

trigger_IT_service_job:
  stage: trigger_IT_service
  script:
    - 'curl http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer'
And this is my trigger_IT_service job output:
Running on DIGITALIZATION...
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in D:/GitLab-Runner/builds/c11pExsu/0/personalname/newproject/.git/
Checking out 24be087a as master...
Removing Output/
git-lfs/2.5.2 (GitHub; windows amd64; go 1.10.3; git 8e3c5c93)
Skipping Git submodules setup
$ curl http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer

StatusCode        : 200
StatusDescription : 200
Content           : {"status":200,"message":"SAP transfer started. Please
                    check in db","errorCode":0,"timestamp":"2020-03-25T13:53:05
                    .722+0300","responseObject":null}
RawContent        : HTTP/1.1 200 200
                    Keep-Alive: timeout=10
                    Connection: Keep-Alive
                    Transfer-Encoding: chunked
                    Content-Type: application/json;charset=UTF-8
                    Date: Wed, 25 Mar 2020 10:53:05 GMT
                    Server: Apache
I have to check the "Content" part of this report in the GitLab CI YAML.
If "message" is "SAP transfer started. Please check in db", the pipeline should pass; otherwise it must fail.
So my question is: how do I parse the HTTP JSON response and pass or fail the job based on it?
Thank you for all your help.
The best way would be to install some tool to parse JSON and use it; there are various tools and examples of this approach.
Given the JSON example from the comment:
{
  "status": 200,
  "message": "SAP transfer started. Please check in db",
  "errorCode": 0,
  "timestamp": "2020-03-25T17:06:43.430+0300",
  "responseObject": null
}
If you can install Python 3 on your runner, you could achieve it all with a script:
import requests  # note: this might require an additional install with 'pip install requests'

message = requests.get('http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer').json()['message']
if message != 'SAP transfer started. Please check in db':
    print('Invalid message: ' + message)
    exit(1)
else:
    print('Message ok')
So the trigger_IT_service job in your YAML would be:
trigger_IT_service_job:
  stage: trigger_IT_service
  script: >
    python -c "import requests; message = requests.get('http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer').json()['message']; (print('Invalid message: ' + message), exit(1)) if message != 'SAP transfer started. Please check in db' else (print('Message ok'), exit(0))"
On AWS ECS or AWS CodeBuild etc., when trying to retrieve credentials using:
http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
suddenly, since Feb 7, 2019, I get 404 Not Found!
curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
The expected result is a valid JSON document containing the AWS session credentials.
After a short investigation, I found that $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI already starts with a slash '/'
[e.g. AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/xxxx-xxxx-xxxx-xxxx-xxxxx]
Solution: just remove the slash after the IP,
e.g. http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
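With that fix applied, the original command becomes:
curl -qL -o aws_credentials.json http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI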
The details: I ran curl with -v on AWS CodeBuild:
> GET //v2/credentials/xxxx-xxxx-xxxx-xxxx-xxxxx HTTP/1.1
> Host: 169.254.170.2
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
Conclusion: since Feb 6 or 7, 2019, AWS added a strict check that rejects requests containing a double slash // with a 404.
I'm trying to set up a cron job on Elastic Beanstalk. The task is being scheduled; for testing purposes it should run every minute. However, it is not working. It is a Django app running in two environments: one is the worker and the other one is hosting the application.
The scheduling part is working: the task is being triggered, but the job itself is not doing its work (the files are not being deleted).
Here is views.py:
from datetime import timedelta

from django.contrib.auth.decorators import login_required
from django.shortcuts import redirect
from django.utils import timezone

from .models import DemoUser, Document  # assumed import path for the models

@login_required
def delete_expired_files(request):
    users = DemoUser.objects.all()
    for user in users:
        documents = Document.objects.filter(owner=user.id)
        if documents:
            for doc in documents:
                now = timezone.now()
                # Delete the document once its validity window has passed.
                if now >= doc.date_published + timedelta(days=doc.owner.group.valid_time):
                    doc.delete()
    return redirect("user_home")
cron.yml:
version: 1
cron:
- name: "delete_expired_files"
url: "http://networksapp.elasticbeanstalk.com/networks_app/delete_expired_files"
schedule: "* * * * *"
However, the access_log part of the log file shows this:
"POST /myapp/management/commands/delete_expired_files HTTP/1.1" 500 124709 "-" "aws-sqsd/2.0"
Why is this happening? How can I fix it?
Thank you so much.