Using Glyphs with Amazon Location Service and Mapbox-GL

I am using the Amazon Location Service with React, react-map-gl and Mapbox-GL. I can successfully load Esri and HERE maps, which suggests my authentication is OK, but I seem to have trouble accessing glyphs (fonts). I am trying to add a cluster markers feature like this. I can add the points and load the base layer, but when I try to add the point counts there is an error accessing the glyphs. It sends a request like this:
https://maps.geo.eu-west-1.amazonaws.com/maps/v0/maps/<MY_MAP>/glyphs/Noto%20Sans,Arial%20Unicode/0-255.pbf?<....SOME_AUTHENTICATION_STUFF>
This seems to match the request format shown here: https://docs.aws.amazon.com/location-maps/latest/APIReference/location-maps-api.pdf
But it responds with: {"message":"Esri glyph resource not found"}
I get a similar error message with HERE maps and different fonts. I have added the following actions to the role, with no success (it loads the map but not the glyphs).
Tried this:
"geo:GetMap*"
And this:
"geo:GetMapStyleDescriptor",
"geo:GetMapGlyphs",
"geo:GetMapSprites",
"geo:GetMapTile"
What do I have to do to set up glyphs correctly in the Amazon Location Service? I have not configured anything; I just hoped they would naturally work. Have I missed a step? I can't see anything about this online.
Is there a workaround where I could load the system font instead of a remote glyph?
I am using the following versions, which are not the most recent, as the most recent are incompatible with Amazon Location Service:
"mapbox-gl": "^1.13.0",
"react-map-gl": "^5.2.11",

The default font stack (Noto Sans, Arial Unicode) for the cluster layer isn't currently available via Amazon Location. You will need to change the font stack used by the cluster layer to something in the supported list: https://docs.aws.amazon.com/location-maps/latest/APIReference/API_GetMapGlyphs.html#API_GetMapGlyphs_RequestSyntax
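For example, with the standard Mapbox clustering recipe this means overriding text-font on the cluster count layer. A minimal sketch, assuming a map based on the Esri Topographic style (whose supported stacks include "Noto Sans Regular"); the layer and source names are illustrative:

map.addLayer({
  id: 'cluster-count',                  // illustrative layer id
  type: 'symbol',
  source: 'points',                     // your clustered GeoJSON source
  filter: ['has', 'point_count'],
  layout: {
    'text-field': '{point_count_abbreviated}',
    'text-font': ['Noto Sans Regular'], // instead of the default ['Noto Sans', 'Arial Unicode']
    'text-size': 12
  }
});

Which stacks are valid depends on the map style you chose, so check the list linked above against your style.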

Related

AWS SageMaker Domain Status "Update_Failed" due to custom image appImageConfigName error

I'm having some trouble recovering from failures in attaching custom images to my SageMaker domain.
I first created a custom image following the guide here.
When I use sagemaker console to attach the image built with sm-docker, it appears to successfully "attach" in the domain's image list, but when inspecting the image in the console, it shows an error:
Value '' at 'appImageConfigName' failed to satisfy constraint: Member must satisfy regular expression pattern
This occurs even when the repository and tag consist only of alphanumeric characters.
After obtaining this error, I deleted the repositories in ECR.
Since then, the domain fails to update and I am unable to launch any apps or attempt to attach additional images.
The first issue I would like to address is restoring functionality of my SageMaker domain so I can troubleshoot the issue further. I am unable to delete the domain because of this error, even when there are no users, apps, or custom images associated with the domain.
The second issue I would like to address is being able to troubleshoot the appImageConfigName error.
Thanks!
While I was unable to delete the domain via the console, I was able to delete it via the CLI.
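Roughly like this (the domain ID is a placeholder; note that the retention policy decides whether the users' EFS home volume is kept or removed):

aws sagemaker list-domains
aws sagemaker delete-domain --domain-id d-xxxxxxxxxxxx --retention-policy HomeEfsFileSystem=Retain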

Recover EFS with aws start-restore-job in OneZone

I didn't find the AvailabilityZoneName parameter in the startRestoreJob SDK documentation:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Backup.html#startRestoreJob-property
For this reason, when I restore the snapshot, it is created as REGIONAL.
The AWS console itself allows you to select this when you restore. Does anyone know a solution?
I was confronted with the same problem; the documentation seems not to be aligned. I checked CloudTrail, but I only got a HIDDEN_DUE_TO_SECURITY_REASONS placeholder...
But in Chrome's developer tools you can see the metadata attributes sent to the server, so you need to use the availabilityZoneName and singleAzFilesystem parameters.
You can pass the file system type information in the startRestoreJob API in the Metadata property.
To see the allowed values, you can call the GetRecoveryPointRestoreMetadata API to get the Metadata value for your recovery point, and then pass the values you get to the StartRestoreJob API.
Docs for the GetRecoveryPointRestoreMetadata API: https://docs.aws.amazon.com/aws-backup/latest/devguide/API_GetRecoveryPointRestoreMetadata.html
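Putting the two answers together, a minimal sketch with the AWS SDK for JavaScript; the ARNs and metadata values are hypothetical, so fetch the real ones from GetRecoveryPointRestoreMetadata first:

const AWS = require('aws-sdk');
const backup = new AWS.Backup();

backup.startRestoreJob({
  RecoveryPointArn: 'arn:aws:backup:eu-west-1:111122223333:recovery-point:EXAMPLE', // hypothetical
  IamRoleArn: 'arn:aws:iam::111122223333:role/BackupRestoreRole',                   // hypothetical
  Metadata: {
    // start from the map returned by getRecoveryPointRestoreMetadata, then add:
    newFileSystem: 'true',
    availabilityZoneName: 'eu-west-1a',  // target AZ for the One Zone file system
    singleAzFilesystem: 'true'
  }
}, (err, data) => {
  if (err) console.error(err);
  else console.log(data.RestoreJobId);
});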

AWS Rekognition Custom Labels Training "The manifest file contains too many invalid data objects" error

I'm trying to do a quick PoC on the AWS Rekognition custom labels feature. I'd like to try using it for object detection.
I've had a couple of attempts at setting it up using only tools in the AWS Console. I'm using images imported from the Rekognition bucket in S3, then I added bounding boxes using the tools in the Rekognition console.
All of my images are marked up with bounding boxes, no whole image labels have been used. I have 9 labels, all of which appear in at least 1 drawing.
I've ensured my images are less than 4096x4096 in size (which is mentioned on this AWS forums thread as a possible cause of this issue).
When I attempt to train my model I get the "The manifest file contains too many invalid data objects" error.
What could be wrong here? An error message complaining about the format of a file that I didn't create manually, and that I can't see or edit, isn't exactly intuitive.

Getting error, "Entity doesn't exist in AsyncLocal" when trying to call CreateBatchWrite<T> method of DynamoDBContext object

I have created a DynamoDB table on my dev machine and I'm trying to insert a couple of rows from my .NET Core application using the CreateBatchWrite<T> method of the DynamoDBContext object. I'm able to query the table from the DynamoDB JavaScript Shell window at the localhost:8000/shell URL and it returns a row count of 0. But when trying to call the CreateBatchWrite<T> method I get the error, "Entity doesn't exist in AsyncLocal".
Explanation
When using X-Ray, this happens when there is an attempt to create a SubSegment without a Parent Segment. Depending on your setup, when you run a query it might try creating a SubSegment, but it's failing because there is no parent segment.
This is common when running a Lambda function locally, as the Mock Lambda Test Tool will not create a Segment for you like the actual Lambda environment does on AWS. This can happen in other scenarios too.
More details here: https://github.com/aws/aws-xray-sdk-dotnet/issues/125
Solution
The easiest way to solve this is to disable X-Ray locally (as you probably don't want to generate traces locally):
In appsettings.Development.json add this:
"XRay": {
"DisableXRayTracing": "true",
"UseRuntimeErrors": "false",
"CollectSqlQueries": "false"
}
The important bit is DisableXRayTracing set to "true".
Make sure your appsettings.Development.json is set to Copy Always in the properties window. You can do this by including this in your .csproj:
<ItemGroup>
  <None Update="appsettings.Development.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
  <None Update="appsettings.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>
If you really want to trace things locally, then make sure you create a parent segment only when running locally (on AWS this would cause problems, as you would have two parent segments: one created manually by you, another one created by AWS).
Add this line before any DynamoDB API methods are executed:
AWSXRayRecorder.Instance.ContextMissingStrategy = ContextMissingStrategy.LOG_ERROR;
You can find more info in this GitHub discussion: https://github.com/aws/aws-xray-sdk-dotnet/issues/69#issuecomment-482688754
Also, you will need to import these two namespaces:
using Amazon.XRay.Recorder.Core;
using Amazon.XRay.Recorder.Core.Strategies;
If you are tracing requests made with the AWS SDK, the X-Ray SDK attempts to generate a subsegment automatically to represent those requests, such as CreateBatchWrite. However, a subsegment can only be created as the child of an existing segment, so if you have not created a segment beforehand, that "Entity doesn't exist" error will occur.
See these docs for how to create custom segments. Alternatively, if you are developing a web app, the X-Ray SDK can automatically create segments for requests made to your service by adding the configuration described in these docs.
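A minimal sketch of the manual approach, wrapping the batch write in a segment opened before the call; the segment name and entity type are illustrative, and context is your DynamoDBContext:

AWSXRayRecorder.Instance.BeginSegment("LocalDebugSegment"); // hypothetical segment name
try
{
    // The SDK's automatic subsegment now has a parent to attach to.
    var batch = context.CreateBatchWrite<MyItem>(); // MyItem is illustrative
    batch.AddPutItems(items);                       // items: the MyItem rows to insert
    await batch.ExecuteAsync();
}
finally
{
    AWSXRayRecorder.Instance.EndSegment();
}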

Serverless Image Handler - How to set subfolder as root

Hi, I got the Serverless Image Handler up and running (using this template: https://docs.aws.amazon.com/solutions/latest/serverless-image-handler/deployment.html). Deployment worked fine, all good.
I pointed it to my already existing bucket "MyBucket", and I can do image rescaling and such when placing images into that bucket.
However we have all our images in a subfolder to that bucket, called "cloudfront_assets".
So after assigning my CNAME to the new CloudFront distribution, I am stuck with having to reference my images like this:
https://subdomain.mydomain.com/cloudfront_assets/image.jpg
instead of
https://subdomain.mydomain.com/image.jpg
I tried editing the CloudFront distribution's origin settings, and set "Origin Path" from /image to things like /cloudfront_assets or /image/cloudfront_assets.
It fixed the path issue, so I didn't have to write "/cloudfront_assets/" before the image, but regardless of what I set, the image rescaling stopped working.
What is the correct way to do this?
Please help, I'm currently stuck.
I set the log level to debug in the Lambda function in order to see what's happening, but as far as I can tell it only says it's getting "access denied".
The handler supports rewrite functionality that allows you to modify the URL; that is likely to be the simplest way to achieve this:
https://docs.aws.amazon.com/solutions/latest/serverless-image-handler/appendix-b.html
Basically, you can rewrite all URLs to always prepend /cloudfront_assets/, similar to how the example rewrites to add /fit-in/.
A match pattern like .* should catch pretty much everything. As the code is Python based, you should use Python regexp syntax; see the sketch after the link below.
The underlying code for the function can be found in the github repos: https://github.com/awslabs/serverless-image-handler/blob/master/source/image-handler/lambda_rewrite.py
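As an illustration only, here is a hypothetical reduction of that rewrite logic to a single function (the real solution wires a match/replace pair through its configuration rather than hard-coding it):

import re

# Hypothetical sketch: prepend the subfolder to every request path,
# so "/image.jpg" is looked up as "/cloudfront_assets/image.jpg".
def rewrite_path(path):
    if path.startswith('/cloudfront_assets/'):
        return path  # already prefixed, leave untouched
    return re.sub(r'^/(.*)$', r'/cloudfront_assets/\1', path)

print(rewrite_path('/image.jpg'))  # -> /cloudfront_assets/image.jpg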