Send email with Microsoft Flow when Power BI alert is triggered - powerbi

I am trying to build a flow that sends an email to me when a Power BI alert is triggered. I have built the flow and am now trying the test option.
This gives me a status code 429 error.
Additional details:
Headers
Retry-After: 600
Strict-Transport-Security: max-age=31536000;includeSubDomains
X-Frame-Options: deny
X-Content-Type-Options: nosniff
RequestId: ad5eb81f-a02d-4edd-b0c2-964cef662d01
Timing-Allow-Origin: *
x-ms-apihub-cached-response: false
Cache-Control: no-store, must-revalidate, no-cache
Date: Thu, 28 Mar 2019 12:35:42 GMT
Content-Length: 254
Content-Type: application/json
Body
{
  "error": {
    "code": "MicrosoftFlowCheckAlertStatusEndpointThrottled",
    "pbi.error": {
      "code": "MicrosoftFlowCheckAlertStatusEndpointThrottled",
      "parameters": {},
      "details": [],
      "exceptionCulprit": 1
    }
  }
}
I noticed this 429 is caused by too many requests, but I do not understand it, since I only have one alert, and this is a very simple flow that's connected to this one alert and should then send an email.

In general, error 429 means you have exceeded the limit of triggers per period (probably 60 seconds, according to https://learn.microsoft.com/en-gb/connectors/powerbi/ ). You can find these parameters with the Peek code tool.
My suggestion is to check how many alerts for the tracked data you had in the Power BI service. A limit that is too low might be the answer.
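If you end up calling the Power BI endpoints yourself rather than through Flow, the conventional way to handle a 429 is to honor the Retry-After header (600 seconds in the response above) before retrying. A minimal sketch in Python; the helper name and the 60-second default are illustrative, not part of any Power BI API:

```python
def retry_after_seconds(headers, default=60):
    """Parse the seconds form of a Retry-After header from a dict of
    429-response headers; fall back to `default` if absent or malformed.
    (Name and default are illustrative, not from any Power BI SDK.)"""
    value = headers.get("Retry-After")
    try:
        return max(0, int(value))
    except (TypeError, ValueError):
        return default

# The 429 response in the question carried "Retry-After: 600",
# i.e. back off for ten minutes before testing the flow again.
delay = retry_after_seconds({"Retry-After": "600"})
print(delay)  # 600
```

A client would `time.sleep(delay)` before the next attempt; retrying sooner just extends the throttling window.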

I got the same error.
It appeared when testing manually.
When I changed the testing to "Automatic", the error changed, and it became clear that the "Send an email" step was causing the issue.
It turned out that the second step needed to be changed to the Outlook - Send an email (V2) step.
It was really confusing, as the MicrosoftFlowCheckAlertStatusEndpointThrottled error was irrelevant and was not the real issue!

Download Limit of AWS API Gateway

We have a service that is used to download time series data from InfluxDB. We do not manipulate the Influx response; after updating some meta information, we push the records as-is.
So there is no Content-Length attached to the response.
We want to expose this service via Amazon API Gateway. Is it possible to integrate such a service with API Gateway? Mainly, is there any limit on response size? The service does not wait for the whole query result to arrive, but will API Gateway do the same, or will it wait for the whole payload to be written to the output stream?
When I tried, I observed a Content-Length header being added by API Gateway.
HTTP/1.1 200 OK
Date: Tue, 26 Apr 2022 06:03:31 GMT
Content-Type: application/json
Content-Length: 3024
Connection: close
x-amzn-RequestId: 41dfebb4-f63e-43bc-bed9-1bdac5759210
X-B3-SpanId: 8322f100475a424a
x-amzn-Remapped-Connection: keep-alive
x-amz-apigw-id: RLKwCFztliAFR2Q=
x-amzn-Remapped-Server: akka-http/10.1.8
X-B3-Sampled: 0
X-B3-ParentSpanId: 43e304282e2f64d1
X-B3-TraceId: d28a4653e7fca23d
x-amzn-Remapped-Date: Tue, 26 Apr 2022 06:03:31 GMT
Does this mean that API Gateway waits for the whole response/EOF from the integration?
If the above is true, what is the maximum number of bytes the API Gateway buffer can hold?
Will API Gateway time out if the response from the integration is too large or does not end within the stipulated time?

FB API "100 continue" & "500 Internal Server Error" (error_subcode 99)

I have a peculiar issue with the Facebook API. I think it probably has to do with high volume, but that has not brought me any closer to the solution. When posting out messages to Facebook API, I occasionally receive an error such as:
HTTP/1.1 100 Continue
Date: Sat, 17 Dec 2016 19:22:38 GMT
HTTP/1.1 500 Internal Server Error
Access-Control-Allow-Origin: *
Pragma: no-cache
Cache-Control: private, no-cache, no-store, must-revalidate
facebook-api-version: v2.3
Expires: Sat, 01 Jan 2000 00:00:00 GMT
x-fb-trace-id: El4BfeJo4vI
x-fb-rev: 2746767
Content-Type: text/html
X-FB-Debug: F3xHF4IY15E3VK9M5acge9B6jBKOEqwP2Ob4F8WsoYRkGeAiY2PkzOjiiawhQ/Uq0TT/Xen+JLZtFXA9ZUsbRg==
Date: Sat, 17 Dec 2016 19:23:08 GMT
Connection: keep-alive
Content-Length: 77
{"error":{"code":1,"message":"An unknown error occurred","error_subcode":99}}
Usually a later retry of the same request will work, so the request itself would not appear to be the culprit. The issue, however, is that the message still sometimes goes through anyway. How should such responses be handled?
I read up on the Continue header, but I'm none the wiser, especially since it comes with a non-descriptive 500 Internal Server Error.
You can probably safely ignore the 100 header; it has correctly been followed up by another response (the 500).
You should never really get a 500 from any site: it means their code is broken. You should report it here: https://developers.facebook.com/bugs/
Facebook's reply:
"An unknown error occurred", "code": 1, "error_subcode": 99
The reason for the above error, according to Facebook:
This error code is an indication that your request timed out. It may be the case that the request is valid, however the maximum processing time for the API was exceeded. Recommendation: Wait a few minutes, and then try again. If the problem persists, please continue filing a bug report.
It has been reported several times that too long a request will result in this type of error subcode (99). Try to narrow your request parameters, since Facebook doesn't support long requests. (Although the 500 error looks new to me.)
You should use pagination, as described in this document:
https://m.facebook.com/groups/pmdcommunity/?view=permalink&id=1174638509255282
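Since Facebook's own recommendation for subcode 99 is to wait and retry, a retry loop with exponential backoff is a reasonable client-side handling. A sketch under stated assumptions: `send` is a hypothetical transport function returning the parsed JSON body, not a Graph API SDK call, and the error shape matches the response quoted above:

```python
import time

def is_transient_fb_error(body):
    """True for the 'unknown error' code 1 / subcode 99 shape shown above."""
    err = body.get("error", {})
    return err.get("code") == 1 and err.get("error_subcode") == 99

def post_with_retry(send, payload, attempts=3, base_delay=1.0):
    """Call send(payload), retrying transient subcode-99 errors with
    exponential backoff. Caveat from the question: the original request
    may have gone through despite the error, so `send` should be
    idempotent or deduplicated on the receiving side."""
    for attempt in range(attempts):
        body = send(payload)
        if not is_transient_fb_error(body):
            return body
        time.sleep(base_delay * (2 ** attempt))
    return body
```

The idempotency caveat is the important part: because the 500 sometimes masks a successful post, blind retries can produce duplicate messages.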

gsoap error: HTTP internal server error: End of file or no input: Operation interrupted or timed out (5 s recv delay)

I have implemented a web service using gSOAP C++. The problem is that I am getting a random 500 internal error with the fault code "End of file or no input: Operation interrupted or timed out".
I have verified the total time of the request; everything is validated within a matter of milliseconds.
I also compared one successful response with the problematic one; all the XML values are identical.
Can anyone suggest where I might be going wrong?
The following is a chunk of debug logs from the SENT.log created by the gSOAP server:
<ResponseCode>00</ResponseCode><pDateTime>12055229</pDateTime><R1>null</R1><R2>null</R2><R3>null</R3><R4>null</R4>
HTTP/1.1 500 Internal Server Error
Server: gSOAP/2.8
Content-Type: text/xml; charset=utf-8
Content-Length: 456
Connection: close
SOAP-ENV:ClientEnd of file or no input: Operation interrupted or timed out (5 s recv delay)HTTP/1.1 500 Internal Server Error
Server: gSOAP/2.8
Content-Type: text/xml; charset=utf-8
Content-Length: 456
Connection: close"
Your timeout setting is perhaps too low (only 5 seconds), which means the connection times out if no data is received within 5 seconds. Set soap->recv_timeout = 30 to increase the data receive timeout to 30 seconds. Which timeout settings are acceptable depends on your application, but 5 seconds is definitely tight.
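For reference, the receive and send timeouts are plain fields on the gSOAP context and can be set right after initialization. A minimal fragment, not a complete server (soap_bind/soap_accept/soap_serve are elided); recv_timeout and send_timeout are real gSOAP context members, interpreted as seconds when positive:

```c
#include "stdsoap2.h"  /* gSOAP runtime header, generated environment */

struct soap soap;
soap_init(&soap);
soap.recv_timeout = 30;  /* wait up to 30 s for incoming data (was 5 s) */
soap.send_timeout = 30;  /* wait up to 30 s when sending the response */
/* ... soap_bind(), soap_accept(), soap_serve() as usual ... */
```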

Is there a way to configure Amazon Cloudfront to delay the time before my S3 object reaches clients by specifying a release date? [closed]

I would like to upload content to S3 and but schedule a time at which Cloudfront delivers it to clients rather than immediately vending it to clients upon processing. Is there a configuration option to accomplish this?
EDIT: This time should be able to differ per object in S3.
There is something of a configuration option to allow this, and it does allow you to restrict specific files -- or path prefixes -- from being served up prior to a given date and time... though it's slightly... well, I don't even know what derogatory term to use to describe it. :) But it's the only thing I can come up with that uses entirely built-in functionality.
First, a quick reminder, that public/unauthenticated read access to objects in S3 can be granted at the bucket level with bucket policies, or at the object level, using "make everything public" when uploading the object in the console, or sending x-amz-acl: public-read when uploading via the API. If either or both of these is present, the object is publicly readable, except in the face of any policy denying the same access. Deny always wins over Allow.
So, we can create a bucket policy statement matching a specific file or prefix, denying access prior to a certain date and time.
{
  "Version": "2012-10-17",
  "Id": "Policy1445197123468",
  "Statement": [
    {
      "Sid": "Stmt1445197117172",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/hello.txt",
      "Condition": {
        "DateLessThan": {
          "aws:CurrentTime": "2015-10-18T15:55:00.000-0400"
        }
      }
    }
  ]
}
Using a wildcard would allow everything under a specific path to be subject to the same restriction.
"Resource": "arn:aws:s3:::example-bucket/cant/see/these/yet/*",
This works, even if the object is public.
This example blocks all GET requests for matching objects by anybody, regardless of permissions they may have. Signed URLs, etc., are not sufficient to override this policy.
The policy statement is checked for validity when it is created; however, the object being matched does not have to exist, yet, so if the policy is created before the object, that doesn't make the policy invalid.
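Generating one such Deny statement per embargoed key is easy to script. A sketch in Python; the function name is made up, and the bucket/key/timestamp are the placeholders used in the policy above:

```python
import json

def embargo_statement(bucket, key, release_iso8601, sid="EmbargoUntilRelease"):
    """Build one bucket-policy statement denying s3:GetObject on a single
    key until the given timestamp, mirroring the policy shown above.
    (Hypothetical helper; bucket, key, and Sid are placeholders.)"""
    return {
        "Sid": sid,
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/{key}",
        "Condition": {
            "DateLessThan": {"aws:CurrentTime": release_iso8601}
        },
    }

policy = {
    "Version": "2012-10-17",
    "Statement": [
        embargo_statement("example-bucket", "hello.txt",
                          "2015-10-18T15:55:00.000-0400")
    ],
}
print(json.dumps(policy, indent=2))
```

Each embargoed object or prefix adds one statement, which is exactly why the 20 KB bucket-policy size limit mentioned below becomes the practical ceiling.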
Live test:
Before the expiration time: (unrelated request/response headers removed for clarity)
$ curl -v example-bucket.s3.amazonaws.com/hello.txt
> GET /hello.txt HTTP/1.1
> Host: example-bucket.s3.amazonaws.com
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Sun, 18 Oct 2015 19:54:55 GMT
< Server: AmazonS3
<
<?xml version="1.0" encoding="UTF-8"?>
* Connection #0 to host example-bucket.s3.amazonaws.com left intact
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>AAAABBBBCCCCDDDD</RequestId><HostId>g0bbl3dyg00kbunc4Ofl1n3n0iz3h3rehahahasqlbot1337kenqweqwel24234kj41l1ke</HostId></Error>
After the specified date and time:
$ curl -v example-bucket.s3.amazonaws.com/hello.txt
> GET /hello.txt HTTP/1.1
> Host: example-bucket.s3.amazonaws.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Sun, 18 Oct 2015 19:55:05 GMT
< Last-Modified: Sun, 18 Oct 2015 19:36:17 GMT
< ETag: "78016cea74c298162366b9f86bfc3b16"
< Accept-Ranges: bytes
< Content-Type: text/plain
< Content-Length: 15
< Server: AmazonS3
<
Hello, world!
These tests were done against the S3 REST endpoint for the bucket, but the website endpoint for the same bucket yields the same results -- only the error message is in HTML rather than XML.
The positive aspect of this policy is that since the object is public, the policy can be removed any time after the date passes, because it is denying access before a certain time, rather than allowing access after a certain time -- logically the same, but implemented differently. (If the policy allowed access after rather than denying access before, the policy would have to stick around indefinitely; this way, it can just be deleted.)
You could use custom error documents in either S3 or CloudFront to present the viewer with a slightly nicer output... probably CloudFront, since you can customize each error code individually, creating a custom 403 page.
The major drawbacks to this approach are, of course, that the policy must be edited for each object or path prefix, and that even though it works per-object, it's not something that is set on the object itself.
And there is a limit to how many policy statements you can include, because of the size restriction on bucket policies:
Note
Bucket policies are limited to 20 KB in size.
http://docs.aws.amazon.com/AmazonS3/latest/dev/access-policy-language-overview.html
The other solution that comes to mind involves deploying a reverse proxy component (such as HAProxy) in EC2 between CloudFront and the bucket, passing the requests through and reading the custom metadata from the object's response headers, looking for a header such as x-amz-meta-embargo-until: 2015-10-18T19:55:00Z and comparing its value to the system clock. If the current time is before the cutoff, the proxy would drop the connection from S3 and replace the response headers and body with a locally generated 403 message, so the client could not fetch the object until the designated time had passed.
This solution seems fairly straightforward to implement, but it requires a non-built-in component, so it doesn't meet the constraint of the question, and I haven't built a proof of concept. However, I already use HAProxy with Lua in front of some buckets to give S3 other capabilities not offered natively, such as removing sensitive custom metadata from responses and modifying the XML on S3 error responses (and directing the browser to apply an XSL stylesheet to it), so there's no obvious reason this application wouldn't work equally well.
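The proxy's core decision is just a timestamp comparison on that custom metadata header. A sketch of the check; the header name x-amz-meta-embargo-until follows the answer's example, and the function itself is hypothetical:

```python
from datetime import datetime, timezone

def is_embargoed(headers, now=None):
    """Return True if the object's x-amz-meta-embargo-until header
    (ISO-8601 with a 'Z' UTC suffix, per the example above) is still
    in the future; objects without the header are served normally."""
    value = headers.get("x-amz-meta-embargo-until")
    if value is None:
        return False
    release = datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ").replace(
        tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return now < release

hdrs = {"x-amz-meta-embargo-until": "2015-10-18T19:55:00Z"}
```

When this returns True, the proxy would discard the S3 response and emit its own 403 instead.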
Lambda@Edge can apply your customized access control easily.

mod_security false positives

I'm getting lots of false positives after just setting up mod_security. I'm running it in detection-only mode, so there are no issues yet, but these filters will start blocking requests once I go live.
I'm afraid I don't 100% understand the significance of these filters; I get hundreds of them on nearly every domain, and all the requests look legitimate.
Request Missing a User Agent Header
Request Missing an Accept Header
What is the best thing to do here? Should I disable these filters? Can I set the severity lower so that requests won't be blocked?
Here is a complete entry
[22/Nov/2011:21:32:37 --0500] u6t6IX8AAAEAAHSiwYMAAAAG 72.47.232.216 38543 72.47.232.216 80
--5fcb9215-B--
GET /Assets/XHTML/mainMenu.html HTTP/1.0
Host: www.domain.com
Content-type: text/html
Cookie: pdgcomm-babble=413300:451807c5d49b8f61024afdd94e57bdc3; __utma=100306584.1343043347.1321115981.1321478968.1321851203.4; __utmz=100306584.1321115981.1.1.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=XXXXXXXX%20clip%20ons
--5fcb9215-F--
HTTP/1.1 200 OK
Last-Modified: Wed, 23 Nov 2011 02:01:02 GMT
ETag: "21e2a7a-816d"
Accept-Ranges: bytes
Content-Length: 33133
Vary: Accept-Encoding
Connection: close
Content-Type: text/html
--5fcb9215-H--
Message: Operator EQ matched 0 at REQUEST_HEADERS. [file "/etc/httpd/modsecurity_crs/base_rules/modsecurity_crs_21_protocol_anomalies.conf"] [line "47"] [id "960015"] [rev "2.2.1"] [msg "Request Missing an Accept Header"] [severity "CRITICAL"] [tag "PROTOCOL_VIOLATION/MISSING_HEADER_ACCEPT"] [tag "WASCTC/WASC-21"] [tag "OWASP_TOP_10/A7"] [tag "PCI/6.5.10"]
Message: Operator EQ matched 0 at REQUEST_HEADERS. [file "/etc/httpd/modsecurity_crs/base_rules/modsecurity_crs_21_protocol_anomalies.conf"] [line "66"] [id "960009"] [rev "2.2.1"] [msg "Request Missing a User Agent Header"] [severity "NOTICE"] [tag "PROTOCOL_VIOLATION/MISSING_HEADER_UA"] [tag "WASCTC/WASC-21"] [tag "OWASP_TOP_10/A7"] [tag "PCI/6.5.10"]
Message: Warning. Operator LT matched 5 at TX:inbound_anomaly_score. [file "/etc/httpd/modsecurity_crs/base_rules/modsecurity_crs_60_correlation.conf"] [line "33"] [id "981203"] [msg "Inbound Anomaly Score (Total Inbound Score: 4, SQLi=5, XSS=): Request Missing a User Agent Header"]
Stopwatch: 1322015557122593 24656 (- - -)
Stopwatch2: 1322015557122593 24656; combined=23703, p1=214, p2=23251, p3=2, p4=67, p5=168, sr=88, sw=1, l=0, gc=0
Producer: ModSecurity for Apache/2.6.1 (http://www.modsecurity.org/); core ruleset/2.2.1.
Server: Apache/2.2.3 (CentOS)
If you look under Section H of the audit log entry you showed at the Producer line, you will see that you are using the OWASP ModSecurity Core Rule Set (CRS) v2.2.1. In this case, I suggest you review the documentation information on the project page -
https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project#tab=Documentation
Specifically, you should review these two blog posts that I did -
http://blog.spiderlabs.com/2010/11/advanced-topic-of-the-week-traditional-vs-anomaly-scoring-detection-modes.html
http://blog.spiderlabs.com/2011/08/modsecurity-advanced-topic-of-the-week-exception-handling.html
Blog post #1 is useful so that you understand which "mode of operation" you are using for the CRS. By looking at your audit log, it appears you are running in anomaly scoring mode. This is where the rules are doing detection but the blocking decision is being done separately by inspecting the overall anomaly score in the modsecurity_crs_49_inbound_blocking.conf file.
Blog post #2 is useful so that you can decide exactly how you want to handle these two rules. If you feel that they are not important to you, then I would suggest that you use the SecRuleRemoveById directive to disable these rules from your own modsecurity_crs_60_exceptions.conf file. As it stands now, these two alerts only generate an inbound anomaly score of 4, which is below the default threshold of 5 set in the modsecurity_crs_10_config.conf file, so the request is not blocked.
Looking at your audit log example, while this request did generate alerts, the transaction was not blocked. If it was, the message data under Section H would have stated "Access denied...".
As for the purpose of these rules: they are meant to flag requests that are not generated by standard web browsers (IE, Chrome, Firefox, etc.), as all of these browsers send both User-Agent and Accept request headers per the HTTP RFC.
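If you do decide these two rules are not relevant to your traffic, the exception file mentioned above would contain something like the following; the rule IDs 960009 and 960015 are taken from the audit log in the question:

```
# modsecurity_crs_60_exceptions.conf
# "Request Missing a User Agent Header"
SecRuleRemoveById 960009
# "Request Missing an Accept Header"
SecRuleRemoveById 960015
```

Keeping the removals in your own exceptions file, rather than editing the CRS rule files, means CRS upgrades won't silently undo them.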
One last comment - I would suggest that you use the official OWASP ModSecurity CRS mail-list for these types of questions -
https://lists.owasp.org/mailman/listinfo/owasp-modsecurity-core-rule-set
You can also search the archives for answers.
Cheers,
Ryan Barnett
ModSecurity Project Lead
OWASP ModSecurity CRS Project Lead
These aren't false positives. Your request headers lack User-Agent and Accept headers. Usually such requests are sent by scanner or hacking tools.