I'm using Amazon SNS. Notifications work well, but sometimes I get this error:
{
"message": "Endpoint is disabled",
"code": "EndpointDisabled",
"name": "EndpointDisabled",
"statusCode": 400,
"retryable": false
}
Does anyone know why this happens?
You can create a new SNS topic, such as push-notification-failures, and then associate your APNS/APNS_SANDBOX applications' "Delivery Failures" event with it. Subscribe to the topic via email (and confirm) and you'll get useful debugging information about failures. This can all be done through the SNS console and doesn't require API calls.
It is probably also worth subscribing an HTTP endpoint to this SNS topic and recording all delivery failures, so you have historical data to work from when debugging production issues.
For example, a delivery FailureMessage of "Platform token associated with the endpoint is not valid" means that you're sending a message from APNS_SANDBOX to an APNS-registered device or vice versa. This can mean that you have the wrong APNS settings for your build system. (We have the frustrating problem of developer-built binaries using APNS_SANDBOX vs. TestFlight-built binaries using APNS for local testing and QA, which is what led me down this path.)
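The same wiring can also be scripted rather than done in the console; a minimal boto3 sketch, assuming placeholder ARNs and email address:
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Create the topic that will receive delivery-failure events.
topic_arn = sns.create_topic(Name="push-notification-failures")["TopicArn"]

# Point the platform application's "Delivery Failures" event at the topic.
sns.set_platform_application_attributes(
    PlatformApplicationArn="arn:aws:sns:us-east-1:123456789012:app/APNS/MyApp",  # placeholder
    Attributes={"EventDeliveryFailure": topic_arn},
)

# Subscribe an email address (confirm via the email SNS sends you).
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="you@example.com")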
I have found three reasons so far:
Sometimes we mixed in device tokens from the sandbox app.
The user turned off notifications in the phone settings.
The user uninstalled the app.
These apply to iPhones/iPads.
There are a few reasons why an endpoint can be disabled. I didn't see it documented anywhere (I might have missed it); here's what I got from support:
You push to an endpoint but the token is invalid/expired. Tokens become invalid if:
The app is no longer installed on the device.
The device has been restored from backup. This renders the token invalid; your app should request a new token and update the SNS endpoint token accordingly.
The app has been re-installed on the same device. On Android, the app is assigned a new token. This happens with APNs as well, but more often on Android.
With APNs, the wrong provisioning profile is selected in Xcode. In this case notifications fail, and the endpoint becomes disabled later, after APNs feedback.
You mistakenly use an iOS development token with an iOS production app, or vice versa.
Apple for some reason invalidates your iOS push certificate, or someone revokes the push certificate from the iTunes Connect portal. It takes a few hours before the endpoint gets disabled.
The same happens with GCM if you update the API key in the Google developer console without updating the platform application credentials in SNS.
You push to an APNs device endpoint but the platform application has been disabled due to an expired push certificate.
You push to a GCM device endpoint but the API key has been updated in the Google developer console without the SNS platform application credentials being updated accordingly.
For details, I recommend this excellent article, which solved my problem.
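Whichever of these is the root cause, it surfaces at publish time as the EndpointDisabled error from the question; a minimal boto3 sketch of catching it (the endpoint ARN is a placeholder):
import boto3

sns = boto3.client("sns", region_name="us-east-1")
endpoint_arn = "arn:aws:sns:us-east-1:123456789012:endpoint/APNS/MyApp/..."  # placeholder

try:
    sns.publish(TargetArn=endpoint_arn, Message="hello")
except sns.exceptions.EndpointDisabledException:
    # The token is stale or the notification service rejected it; fetch a
    # fresh device token from the app, then update and re-enable the
    # endpoint (see the SetEndpointAttributes answer below).
    pass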
According to http://docs.aws.amazon.com/sns/latest/APIReference/API_Publish.html that means that the endpoint is disabled.
From http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/sns/model/SetEndpointAttributesRequest.html:
Enabled -- flag that enables/disables delivery to the endpoint. Message Processor will set this to false when a notification service indicates to SNS that the endpoint is invalid. Users can set it back to true, typically after updating Token.
"notification service" in this case is referring to Google's GCM, Apples APNS or Amazon's ADM.
I had the same issue.
This is what I did:
export the FULL CERTIFICATE from Keychain Access to a .p12 file
export the PRIVATE KEY from Keychain Access to a *private.p12 file
use openssl with the downloaded .cer file (from the iOS Developer Member Center) to create a public .pem certificate
use openssl with the generated *private.p12 file to create a private .pem keyfile (a scripted alternative to these two openssl steps is sketched after this list)
In AWS SNS create a new Application. Give it a name. Choose Apple Development.
Choose the FULL CERTIFICATE from Keychain Access with a .p12 extension, and type in the passphrase you chose when exporting from Keychain Access
Copy the content of the public CERTIFICATE .pem file, to the textarea labelled "Certificate", including the starting and ending lines:
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
Copy only the part of the private key .pem file starting and ending with the following lines, to the textarea labelled "Private Key":
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
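If you prefer scripting the two openssl steps above, here is a sketch using Python's cryptography package; the file names and passphrase are assumptions:
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.serialization import pkcs12

# Load the .p12 exported from Keychain Access.
with open("apns_full_certificate.p12", "rb") as f:
    key, cert, _chain = pkcs12.load_key_and_certificates(f.read(), b"your-passphrase")

# Public certificate PEM (what goes in the "Certificate" textarea).
with open("apns_cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))

# Private key PEM (what goes in the "Private Key" textarea).
with open("apns_private_key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,  # yields BEGIN RSA PRIVATE KEY
        serialization.NoEncryption(),
    ))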
I use Cordova with phonegap-plugin-push 1.4.4, but my issue had nothing to do with PhoneGap. Apart from a bit of confusion about the above, what finally did the trick for me was to open my project in Xcode, find the target for my project, and enable Push Notifications. This automatically adds the "Push Notifications" entitlement to the app ID. The next time the app is installed on your device, push notifications should work. At least they did for me.
I hope this can save someone experiencing the same issue half a day of work! :)
Quick checklist before taking drastic measures:
Generate the Certificate Signing Request (CSR) using Keychain App.
Export the APNS certificate and its private key into a single p12 file using Keychain App.
When you create a new application in Amazon SNS, the platform must match the APNS environment (Development/Production on both sides).
When you request a device token, you must be in the right application (the application's bundle identifier matches the APNS certificate).
When you create a new platform endpoint in AWS SNS, the device token must be added to the right application (the right application certificate and the right Development/Production platform).
In my case I generated the CSR using a third-party SSL tool. I obtained a valid certificate from the Apple developer portal, but without the private key. Then I tried Windows' certificate tool to export it, without much success. A waste of time. Start your Mac.
Then I used the AmazonMobilePush sample app to get a device token. Because the demo's bundle identifier didn't match my certificate, the endpoint was invalid. On each SNS send, the endpoint became disabled (false). In the end the cause was obvious, but I still lost precious time.
If you get the "Endpoint is disabled" error, use the code below to re-enable the endpoint with your Amazon credentials, then push the notification:
// Re-enable the device endpoint. Note that the attribute value must be the
// string "true"; setting it to "false" is what disables an endpoint.
var sns = new AmazonSimpleNotificationServiceClient("AwsAccessKeyId", "AwsSecretAccessKey", RegionEndpoint.USWest1);
Dictionary<string, string> attributes = new Dictionary<string, string>();
attributes.Add("Enabled", "true");
sns.SetEndpointAttributes(new SetEndpointAttributesRequest
{
    Attributes = attributes,
    EndpointArn = "AwsEndPointArn" // this is the device endpoint ARN
});
For me, I was getting "Platform token associated with the endpoint is not valid" because my SNS platform application endpoints were not set up correctly. Specifically, the SNS console was not reading the credentials correctly from my .p12 file, even though it contained the correct cert and private key. The solution, based on this post, was to create a second .p12 file that contained the cert and no key. I loaded the credentials from the first .p12 file, and then loaded the credentials from the second .p12 file. I could see the cert string change when I did so, and afterwards I had no problems.
If you are creating a production endpoint, SNS will warn you about mismatched certs, but it does no such checking for development endpoints. The only way you will know that the endpoint is borked is when you get the platform token error.
I sure hope this helps somebody out there, as it drove me to distraction.
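If you'd rather not round-trip through Keychain to produce that second, cert-only .p12, here is a sketch with Python's cryptography package (file names and passphrases are assumptions):
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.serialization import pkcs12

# Load the original bundle that contains both cert and private key.
with open("cert_and_key.p12", "rb") as f:
    key, cert, _chain = pkcs12.load_key_and_certificates(f.read(), b"passphrase")

# Write a new .p12 containing the certificate only (key=None).
cert_only = pkcs12.serialize_key_and_certificates(
    name=b"apns-cert-only",
    key=None,
    cert=cert,
    cas=None,
    encryption_algorithm=serialization.BestAvailableEncryption(b"passphrase"),
)
with open("cert_only.p12", "wb") as f:
    f.write(cert_only)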
I am using this. If the get-endpoint response returns the NotFound error, it creates an endpoint (this should never happen, but hell, it's on the AWS SNS documentation website).
If that doesn't happen, it means you're getting the info for the endpoint. It can either be OK (the tokens match and Enabled is true), or the other way around (in which case you need to update it).
- (void)getEndpointDetailsWithResponse:(void(^)(AWSSNSGetEndpointAttributesResponse *response, AWSTask *))handleResponse {
    NSString *deviceTokenForAWS = [self deviceTokenForAWS];
    AWSSNS *manager = [AWSSNS SNSForKey:@"EUWest1SNS"];
    AWSSNSGetEndpointAttributesInput *input = [AWSSNSGetEndpointAttributesInput new];
    input.endpointArn = self.endpointArn;
    AWSTask *getEndpointAttributesTask = [manager getEndpointAttributes:input];
    [getEndpointAttributesTask continueWithBlock:^id(AWSTask *task) {
        NSLog(@"%@ Error: %@", task.result, task.error);
        AWSSNSGetEndpointAttributesResponse *result = task.result;
        NSError *error = task.error;
        if (error.code == AWSSNSErrorNotFound) {
            // Endpoint no longer exists: create a new one.
            [self createEndpointWithResponse:^(AWSSNSCreateEndpointResponse *createResponse) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    if (handleResponse != nil) {
                        handleResponse(result, task);
                    }
                });
            }];
        } else {
            NSLog(@"response for get endpoint attributes : %@", result);
            NSString *token = [result.attributes valueForKey:@"Token"];
            NSString *enabled = [result.attributes valueForKey:@"Enabled"];
            NSLog(@"token : %@, enabled : %@", token, enabled);
            BOOL wasSuccessful = [token isEqualToString:deviceTokenForAWS] && ([enabled localizedCaseInsensitiveCompare:@"true"] == NSOrderedSame);
            if (!wasSuccessful) {
                NSLog(@"device token does not match the AWS token OR it is disabled!");
                NSLog(@"Need to update the endpoint");
                // Update the stored token and re-enable the endpoint.
                AWSSNSSetEndpointAttributesInput *seai = [AWSSNSSetEndpointAttributesInput new];
                seai.endpointArn = self.endpointArn;
                NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:deviceTokenForAWS, @"Token", @"true", @"Enabled", nil];
                seai.attributes = attributes;
                AWSTask *setEndpointAttributesTask = [manager setEndpointAttributes:seai];
                [setEndpointAttributesTask continueWithBlock:^id(AWSTask *task) {
                    NSLog(@"response : %@, error: %@", task.result, task.error);
                    dispatch_async(dispatch_get_main_queue(), ^{
                        if (handleResponse != nil) {
                            handleResponse(result, task);
                        }
                    });
                    return nil;
                }];
            } else {
                NSLog(@"all is good with the endpoint");
                dispatch_async(dispatch_get_main_queue(), ^{
                    if (handleResponse != nil) {
                        handleResponse(result, task);
                    }
                });
            }
        }
        return nil;
    }];
}
This is an exact replica of the AWS SNS token management code from the documentation here: https://mobile.awsblog.com/post/Tx223MJB0XKV9RU/Mobile-token-management-with-Amazon-SNS
I can attach the rest of my implementation if needed, but this part is the most important one.
I've got a Next.js application that uses AWS Cognito user pools for authentication. I have a custom UI and am using the aws-amplify package directly, invoking signIn/signOut/etc. in my code. (I previously used the AWS Hosted UI and had the same problem set out below; I hoped switching and digging into the actual APIs would reveal my problem, but it hasn't.)
Everything in development (running on localhost) is working correctly: I'm able to log in and get access to my current session, both in a page's render function using
import { Auth } from 'aws-amplify';
...
export default function MyPage(props) {
  useEffect(() => {
    (async () => {
      const session = await Auth.currentSession();
      ...
    })();
  });
  ...
}
and during SSR
import { withSSRContext } from 'aws-amplify';
...
export async function getServerSideProps(context) {
  ...
  const SSR = withSSRContext(context);
  const session = await SSR.Auth.currentSession();
  ...
}
However, when I deploy to AWS Amplify where I run my staging environment, the call to get the current session during SSR fails. This results in the page rendering as if the user is not logged in then switching when the client is able to determine that the user is in fact logged in.
Current Hypothesis - missing cookies(??):
I've checked that during the login process that the AWS cookies are being set correctly in the browser. I've also checked and devtools tells me the cookies are correctly being sent to the server with the request.
However, if I log context.req.headers inside getServerSideProps in my staging environment, the cookie header is missing (whereas in my dev environment it appears correctly). This would explain what I'm seeing, as getServerSideProps isn't seeing my auth tokens, etc., but I can't see why the cookie headers would be stripped.
Has anyone seen anything like this before? Is this even possible? If so, why would this happen? I assume I'm missing something, e.g. something config-related, but I feel like I've followed the docs pretty closely. My current config looks like this:
Amplify.configure({
  Auth: {...},
  ssr: true
});
Next.js version is 11.1.2 (latest)
Any help very much appreciated!
You have to use Next@11.0.0 to use getServerSideProps, withSSRContext, and the Auth module in production.
I had the same issue.
My solution was to disconnect the branch that had the authentication problem and then reconnect it.
What are your build settings? I guess you are using next build && next export, in which case getServerSideProps will not work. See https://nextjs.org/docs/advanced-features/static-html-export#unsupported-features
To use SSR with AWS Amplify, see https://docs.aws.amazon.com/amplify/latest/userguide/server-side-rendering-amplify.html#redeploy-ssg-to-ssr, or consider deploying on an actual Node server that you can start with next start (e.g. AWS EC2), or deploy on Vercel.
Otherwise, if you use next export, you have to make do with client-side data fetching and client-side updates only, and you cannot use the dynamic server-side features of Next.js.
One reason for context.req.headers not having any cookies in it is that the CloudFront distribution is not forwarding them.
This "CloudFront behaviour" can be changed in two ways:
Forward all cookies, OR
Forward specified cookies (i.e. an array of cookie names)
To change the behaviour, navigate to CloudFront in the AWS console > Distributions > your_distribution > Behaviors tab.
Then edit the existing behaviour or create a new one > change the cookie settings (for example, set them to "All").
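The same change can be scripted; a boto3 sketch, assuming the distribution still uses legacy ForwardedValues settings rather than cache policies (the distribution ID is a placeholder):
import boto3

cf = boto3.client("cloudfront")
dist_id = "E1234567890ABC"  # placeholder

resp = cf.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Forward all cookies on the default behavior (only valid for distributions
# still on legacy ForwardedValues settings, not cache policies).
config["DefaultCacheBehavior"]["ForwardedValues"]["Cookies"] = {"Forward": "all"}

cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=etag)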
I'm trying to create a custom authentication method for Google cloud endpoints. The idea being I can configure my ESPv2 container (an Extensible service proxy based on Envoy), which is hosted on Google cloud run, to obtain JWT's from a custom issuer, also hosted on cloud run.
Following the Endpoints guide for gRPC, I figure the jwks_uri: part of the YAML file should point to a URL which exposes the public key (which you can do by putting a JWK into a JSON file and hosting that file on Google Cloud Storage, exposed to the public internet).
The part that has me stumped is the issuer. I've gone through RFC 7519, which states that the issuer is a string or URI value. I'm not very familiar with the specific Envoy implementation that the ESPv2 container uses, but my best guess is that the issuer: option in the YAML file is simply matched against the domain or string that the server issued when the token was created.
I'm probably wrong so I'd really appreciate some guidance on this one.
Kind regards,
Despicable B
issuer should be the "iss" field in the JWT token that you send to ESPv2.
Author's Solution
After working with the Google Cloud endpoints team, and some of the contributors for ESPv2, we figured it out (as well as found a few things to point out to anyone wanting to do this in future).
Addressing the original question
Indeed as Wayne Zhang pointed out, the issuer can be ANY string value so long as it matches the "iss" claim in the JWT payload.
e.g.
authentication:
  providers:
  - id: some-fancy-id
    issuer: fart # <-- Don't wrap ANY of these in double-quotes
    jwks_uri: https://storage.googleapis.com/your-public-bucket/jwk-public-key.json
    audiences: some-specific-name
and then in your (decoded) JWT
// Header
{
  "alg": "RS256",
  "kid": "custom-id-system-you-specify",
  "typ": "JWT"
}
// Payload
{
  "aud": [
    "some-specific-name"
  ],
  "exp": 1590139950, <-- MUST be an INTEGER value
  "iat": 1590136350, <-- ^^
  "iss": "fart",
  "sub": "Here is some sulphur dioxide"
}
Error/Bug #1 - "iat" and "exp" should be an integer NOT a string
As you can already see from the decoded JWT above, the "exp" and "iat" claims MUST be integer values (this is stated clearly in RFC 7519, sections 4.1.4 and 4.1.6).
This seems like a simple mistake, but as the ESPv2 contributors and I found, the error messages weren't particularly helpful in figuring out what the problem was.
For example, if you had written the "iat" and "exp" claims as strings rather than integers, the ESPv2 container would inform you that your JWT was either not properly Base64URL-encoded or was invalid JSON. Which, to the unaware, might seem like you've used the library incorrectly.
Some changes to the error messages were made to address this in future, you can see the issue that was raised, and its conclusion here.
Error #2 - Wrong key, and JSON format
Before claiming victory in this battle of attrition, I ran into one more error, which was just about as vague as the previous one.
When trying to call a method that required authentication, I was greeted with the following
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAUTHENTICATED
details = "Jwks remote fetch is failed"
debug_error_string = "{"created":"@1590054504.221608572","description":"Error received from peer ipv4:216.239.36.53:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Jwks remote fetch is failed","grpc_status":16}"
>
Which you might think means that the ESPv2 couldn't retrieve your key.
The cause was three related issues.
ESPv2 only supports X509 and RSA key pairs, so don't make the same mistake I did and use EC-generated key pairs.
jwcrypto does NOT add the "alg" and "kid" fields to your key file by default; make sure these are added, or jwcrypto won't know what algorithm to use when signing any JWTs you've generated.
The final error was the format of the JSON file. When you call the methods to export the keys, you get the following:
{
  "e": "XXXX",
  "kty": "RSA",
  "n": "crazyRandomNumbersAndLetters",
  "alg": "RS256", <-- NOT ADDED BY DEFAULT
  "kid": "custom-id-system-you-specify" <-- ^^
}
Simply serving this object on its own at the jwks_uri is incorrect. The proper format is as follows:
{
  "keys": [
    {
      "e": "XXXX",
      "kty": "RSA",
      "n": "crazyRandomNumbersAndLetters",
      "alg": "RS256", <-- NOT ADDED BY DEFAULT
      "kid": "custom-id-system-you-specify" <-- ^^
    }
  ]
}
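Putting the three fixes together, here is a sketch with jwcrypto that generates an RSA key with explicit "alg"/"kid", signs a token with integer "iat"/"exp", and exports the public key wrapped in "keys"; the identifiers reuse the examples above and the exact jwcrypto parameters are assumptions:
import json, time
from jwcrypto import jwk, jwt

# RSA key (ESPv2 does not accept EC), with explicit "alg" and "kid".
key = jwk.JWK.generate(kty="RSA", size=2048, alg="RS256",
                       kid="custom-id-system-you-specify", use="sig")

# Publish the public half wrapped in a {"keys": [...]} JWKS document.
jwks = {"keys": [json.loads(key.export_public())]}
with open("jwk-public-key.json", "w") as f:
    json.dump(jwks, f)

# Sign a token whose "iat"/"exp" are integers, not strings.
now = int(time.time())
token = jwt.JWT(
    header={"alg": "RS256", "kid": "custom-id-system-you-specify", "typ": "JWT"},
    claims={"iss": "fart", "aud": ["some-specific-name"],
            "iat": now, "exp": now + 3600, "sub": "Here is some sulphur dioxide"},
)
token.make_signed_token(key)
print(token.serialize())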
If you do all this, it should be smooth sailing.
I apologise for getting a little off topic, but I hope others don't have to jump through as many hoops as I did to get this going :)
I can't thank the developers at ESPv2 enough for their quick replies and insight into the problem. Top job!
Best of luck coding!
Despicable B.
I am trying to retrieve data via the Autodesk Data Management API. So far I've created a Forge app and connected it with a BIM 360 integration.
Then I wanted to get a list of all hubs, but when I do so, I receive a JSON object which contains a warning:
warnings: [{
  "AboutLink": null,
  "Detail": "You don't have permission to access this API",
  "ErrorCode": "BIM360DM_ERROR",
  "HttpStatusCode": "403",
  ...
}]
I called the web service via AJAX, which looks like this:
this.getToken(function(token) {
  $.ajax({
    url: "https://developer.api.autodesk.com/project/v1/hubs",
    beforeSend: function(xhr) {
      xhr.setRequestHeader("Authorization", "Bearer " + token);
    }
  }).done(...);
});
The token is a 3-legged one. I am not sure which API I don't have permission for, because I am pretty sure that I have permission for BIM 360 (I created the integration as an administrator).
In addition to what Zhong mentioned, I would suggest you try this sample. It will ask you to provision your Forge Client ID under your BIM 360 settings; just follow the steps the app presents.
With both 2- and 3-legged tokens, the app accessing the data (the Forge Client ID) needs authorization from the account admin. Without that, the Hubs endpoint will not return your BIM 360 hub, and the same applies to the Projects endpoint inside it.
Does everything else work fine? For example, can you get all the hubs successfully? I just verified on my side: I can see the response including the same warning you mentioned, but the hubs are listed correctly, and you can get the projects/items/versions without problems. I pasted my Postman response as follows.
If you check the blog https://forge.autodesk.com/blog/tutorial-using-curl-3-legged-authentication-bim-360-docs-upload, it also shows the same warning, but there seems to be no impact on the subsequent operations. I am not exactly sure what the warning means; I will check and update the details, but so far it seems you can ignore it.
I am trying to secure an endpoint that I have for the DocuSign Connect API. I have checked "Sign Message with X509 Certificate" in the DocuSign Connect API configuration.
The client certificate common name has been added to the DocuSign account as well.
I am trying to validate the subject sent using the rails-auth gem.
Following is the content of the ACL file (acl.yml):
---
- resources:
  - method: POST
    path: /
  allow_x509_subject:
    cn: "the common name"
I have added the following in the config.ru file
app = Rails.application
acl = Rails::Auth::ACL.from_yaml(
  File.read("path of the acl.yml"),
  matchers: { allow_x509_subject: Rails::Auth::X509::Matcher }
)
acl_auth = Rails::Auth::ACL::Middleware.new(app, acl: acl)
x509_auth = Rails::Auth::X509::Middleware.new(
  acl_auth,
  ca_file: "path_to_the_pem_file.crt",
  cert_filters: { 'X-SSL-Client-Cert' => :pem }
)
run x509_auth
I am getting the following exception.
*** Exception Rails::Auth::NotAuthorizedError in Rack application object (unauthorized request)
The common name added in the DocuSign account is the same as the one I specified in the YML file. Could somebody please help me find the issue here?
I am using Ruby 2.2.2 with Rails 4.2.2 and rails-auth 2.0.3.
The best way to secure a Connect endpoint and ensure that all calls are coming from DocuSign is to use HMAC security.
When using HMAC security, each message sent from your DocuSign Connect account includes additional header values, one for each HMAC key you have defined (up to one hundred), which will contain the message body hashed with one of your HMAC keys, using HMACSHA256. For example, if you have defined two keys then two headers, X-DocuSign-Signature-1 and X-DocuSign-Signature-2, will be added. They will contain the message body hashed with your first and second secret keys, respectively.
https://developers.docusign.com/esign-rest-api/guides/connect-hmac
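For illustration, a minimal Python sketch of the verification described above (HMAC-SHA256 over the raw request body, Base64-encoded, compared against e.g. the X-DocuSign-Signature-1 header); exact secret handling may vary with your account configuration:
import base64, hashlib, hmac

def verify_docusign_hmac(secret: str, payload: bytes, signature_header: str) -> bool:
    # HMAC-SHA256 over the raw request body, then Base64, per the Connect guide.
    digest = hmac.new(secret.encode("utf-8"), payload, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode("ascii")
    # Constant-time comparison against the header value.
    return hmac.compare_digest(expected, signature_header)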
I am trying to send and receive messages between a local workgroup machine (Windows 7), call it the 'client', and Service Bus 1.0 set up on a workgroup server (hosted on AWS EC2). After many trials and much research, I'm unable to send messages from the client machine to the server. I've followed a number of articles that appear to indicate that it is possible, but I cannot resolve the authentication issue I'm seeing.
Connecting to Windows Server Service Bus on AWS
Microsoft Service Bus on a Windows Workgroup
I note the Microsoft system requirements appear to indicate that it is "not supported" and "not possible". My question is: can this be done, and has anyone had success? Any help would be greatly appreciated.
msdn.microsoft.com/en-us/library/windowsazure/jj193011(v=azure.10).aspx
My attempts include using either the WindowsTokenProvider and OAuthTokenProvider. I get the same result:
System.UnauthorizedAccessException: The token provider was unable to provide a security token while accessing 'https://xx.xx.xx.xx:9355/ServiceBusDefaultNamespace/$STS/Windows/'. Token provider returned message: ''. ---> System.IdentityModel.Tokens.SecurityTokenException: The token provider was unable to provide a security token while accessing 'https://xx.xx.xx.xx:9355/ServiceBusDefaultNamespace/$STS/Windows/'. Token provider returned message: ''. ---> System.Net.WebException: The remote server returned an error: (401) Unauthorized.
The Service Bus namespace is set up with AddressingScheme "Path" for a workgroup install, and the client-side connection string includes the IP to reach the server; I've also set a RemoteCertificateValidationCallback before creating the queues.
Endpoint=sb://xx.xx.xx.xx/ServiceBusDefaultNamespace;StsEndpoint=https://xx.xx.xx.xx:9355/ServiceBusDefaultNamespace;RuntimePort=9354;ManagementPort=9355;WindowsUsername=SBUser;WindowsDomain=[NotUsed];WindowsPassword=[Password]
Code to attach to the remote queue and send a message is as follows:
ServiceBusConnectionStringBuilder connBuilder = new ServiceBusConnectionStringBuilder(ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"]); // gets the connection string above
TokenProvider tokenProvider = WindowsTokenProvider.CreateWindowsTokenProvider(
    connBuilder.StsEndpoints,
    new NetworkCredential(connBuilder.WindowsCredentialUsername, connBuilder.WindowsCredentialPassword));
MessagingFactorySettings messagingFactorySettings = new MessagingFactorySettings();
messagingFactorySettings.TokenProvider = tokenProvider;
MessagingFactory messagingFactory = MessagingFactory.Create(connBuilder.GetAbsoluteRuntimeEndpoints(), messagingFactorySettings);
QueueClient requestQueue = messagingFactory.CreateQueueClient("RequestQueue");
...
requestQueue.Send(sendMessage); // Fails here
The server account is SBUser with a password, and I have left the domain/host blank on the token provider. I note that the Event Viewer on the server shows the authentication being attempted with the client's user account, not the one from the token provider. Why is this? I'm obviously missing something needed to authenticate on the server.
An account failed to log on.
Subject:
Security ID: NULL SID
Account Name: -
Account Domain: -
Logon ID: 0x0
Logon Type: 3
Account For Which Logon Failed:
Security ID: NULL SID
Account Name: [ClientLogin]
Account Domain: [ClientMachine]
Failure Information:
Failure Reason: Unknown user name or bad password.
Status: 0xc000006d
Sub Status: 0xc0000064
Appreciate any help. Thanks.
Try the OAuthTokenProvider and make sure that connBuilder is passing the right values.
TokenProvider tokenProvider = TokenProvider.CreateOAuthTokenProvider(connBuilder.StsEndpoints, new NetworkCredential(connBuilder.WindowsCredentialUsername, connBuilder.WindowsCredentialPassword));
Once you try this, please reply with the exception you get on your client. Also, on the server, look for an event in the Service Bus section that gives more details about the exception.
With that information we should continue to the next step.
Did you get to the bottom of this?
I managed to get around the exact same issue by putting the fully qualified domain name of the server that the certificate is bound to into the client machine's hosts file.
So where you entered the IP address in the connection string, you should instead enter 'AMAZONA-PQxxxxx', and in your hosts file have 'AMAZONA-PQxxxxx' resolve to the IP address, as in the example below.
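For example, the hosts file on the client (C:\Windows\System32\drivers\etc\hosts) would get an entry like this, with placeholder values:
xx.xx.xx.xx    AMAZONA-PQxxxxx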
We had the same issues. Server: W2k12R2, standalone, workgroup; client: Windows 7, same workgroup.
It's necessary to have the same user accounts on both systems. It looks like some kind of "authentication proxy stuff" is running.
Take a look at the compatibility matrix mentioned above:
http://msdn.microsoft.com/en-us/library/windowsazure/jj193011(v=azure.10).aspx
Thank you,
Holger