My Serverless Image Handler was working fine until now, but I'm suddenly getting the following error.
start_thumbor error: pycurl: libcurl link-time ssl backend (openssl) is different from compile-time ssl backend (nss)
This looks like a problem with the version of pycurl.
Please help me resolve it.
I tried changing the Python version to 3.6 in the ServerlessImageHandler Lambda function configuration.
I found a discussion about that issue on https://forums.aws.amazon.com/thread.jspa?messageID=909444, which sent me to https://github.com/awslabs/serverless-image-handler/issues/127#issuecomment-514757029.
GitHub user timkelty has the solution:
1. Go to your CloudFormation stack.
2. Click Update.
3. Choose "replace template".
4. Paste in https://cf-templates-nestrom.s3-eu-west-1.amazonaws.com/serverless-image-handler/1.0/serverless-image-handler.template
So far this has worked for me in us-east-1 and us-west-1.
AWS has released a new version of Serverless Image Handler; that is why everybody is suffering now, because the Thumbor functionality fails in the new version.
In the new version, SharpJS is used instead of Thumbor API calls.
You can check the new version and download it from here.
Even though you can still construct URLs in the old style, images in subfolders can no longer be accessed without encoding the URL.
Old way:
abcdef.cloudfront.net/team/team1.png
New way:
abcdef.cloudfront.net/{base64encodedPath}
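For illustration, a minimal Python sketch of building a new-style URL, assuming (as described above) that the handler accepts the base64-encoded object path; the domain and key are placeholders:

```python
import base64

# Assumption: the new handler accepts the object path base64-encoded
# directly in the URL, as described above. Domain and key are placeholders.
path = "team/team1.png"
encoded = base64.b64encode(path.encode("utf-8")).decode("utf-8")
url = f"https://abcdef.cloudfront.net/{encoded}"
print(url)  # https://abcdef.cloudfront.net/dGVhbS90ZWFtMS5wbmc=
```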
Note 1: If your images are in the root directory of the bucket, you are still able to access them old style like this:
abcdef.cloudfront.net/team1.png
Note 2: If you update your existing CloudFormation stack, you will keep your old CloudFront domain (which is a plus).
You can also follow the current fixes from here.
We're using AWS's Elasticsearch service. We have a custom synonyms package associated with the Elasticsearch domain. The file lives in an S3 bucket; when the file is updated, it triggers a Lambda that updates the package version in AWS. We followed the documentation at https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/custom-packages.html, and so far that all works great for uploading and updating a new version of the package. However, the way AWS Elasticsearch custom packages work, each domain that uses the custom package stores its own copy of the file as well. To keep search behavior predictable, domains continue to use their current package version until you explicitly update them.
The problem is that I can't find any way to programmatically tell the domain to use the new version of the package. It's like 2 clicks in the console, but I cannot for the life of me find a way to do it from my automation code. The update_package method just updates the package version, it doesn't tell the associated domains to use the new version.
We're looking for a way to programmatically tell an AWS Elasticsearch domain to update the version of an associated custom synonyms file.
So after banging my head on this for days, I finally figured it out 5 minutes after posting this question. You just need to call the associate_package method again after you update the package. I had actually tried that previously and it failed, but it turns out it fails if there is no new version of the package available. I think I was calling associate_package again before the update_package call had completed, so the new version wasn't available yet.
Anyway, hope this helps someone in the future.
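For anyone automating this, here is a minimal boto3 sketch of that sequence (the package ID, domain name, bucket, and key are placeholders; the polling loop is there because associate_package fails if the update hasn't finished yet):

```python
import time

import boto3

es = boto3.client("es")
PACKAGE_ID = "pkg-xxxxxxxxxxxx"   # placeholder
DOMAIN_NAME = "my-search-domain"  # placeholder

# 1. Point the package at the new file version in S3.
es.update_package(
    PackageID=PACKAGE_ID,
    PackageSource={"S3BucketName": "my-bucket", "S3Key": "synonyms.txt"},
)

# 2. Wait until the package update has completed; calling
#    associate_package before the new version exists fails.
while True:
    pkg = es.describe_packages(
        Filters=[{"Name": "PackageID", "Value": [PACKAGE_ID]}]
    )["PackageDetailsList"][0]
    if pkg["PackageStatus"] == "AVAILABLE":
        break
    time.sleep(10)

# 3. Re-associate the package so the domain picks up the new version.
es.associate_package(PackageID=PACKAGE_ID, DomainName=DOMAIN_NAME)
```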
Correct, you have to specifically call associate_package again to link the domain to the latest version of that package.
I want to make a bot that makes other bots on the Telegram platform. I want to use AWS infrastructure; it looks like their Lambda functions are a perfect fit, since you pay for them only when they are active. In my concept, each bot equals one Lambda function, and they all share the same codebase.
At the starting point, I thought to create each new Lambda function programmatically, but I think this will bring me problems later, like needing to attach many services programmatically via the AWS SDK: API Gateway, DynamoDB. But the main problem is: how will I update the codebase for these 1000+ functions later? I think a bash script is a bad idea here.
So I moved forward and found SAM (AWS Serverless Application Model) and CloudFormation, which I guess should help me. But I can't understand the concept. I can make a stack with all the required resources, but how will I make new bots from this one stack? Or should I build a template and create new stacks for each new bot programmatically via the AWS SDK from this template?
Next, how do I update them later? For example, I want to update all bots that have version 1.1 to version 1.2. How will I replace them? Should I make a new stack, or can I update the older ones? I don't see any options for that in the CloudFormation UI or any related methods in the AWS SDK.
Thanks
But the main problem is: how will I update the codebase for these 1000+ functions later?
You don't. You use a Lambda alias. This allows you to fully decouple your Lambda versions from your clients. This works because your clients' code (or API Gateway) invokes an alias of your function. The alias is fixed and does not change.
However, an alias is like a pointer: it can point to different versions of your Lambda function. Therefore, when you publish a new Lambda version, you just point the alias to it. It's fully transparent to your clients, and their alias does not require any change.
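A minimal boto3 sketch of that release flow (function and alias names are placeholders; clients would invoke telegram-bot:live rather than a specific version):

```python
import boto3

lam = boto3.client("lambda")
FUNCTION_NAME = "telegram-bot"  # placeholder
ALIAS_NAME = "live"             # placeholder; create_alias is needed once, up front

# Freeze the currently deployed code as an immutable numbered version.
version = lam.publish_version(FunctionName=FUNCTION_NAME)["Version"]

# Repoint the alias at the new version; clients keep calling the alias
# and pick up the new code without any change on their side.
lam.update_alias(
    FunctionName=FUNCTION_NAME,
    Name=ALIAS_NAME,
    FunctionVersion=version,
)
```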
I agree with @Marcin. Also, it would be worth checking out the Serverless Framework. It seems like you are still experimenting, so most likely you are deploying using bash scripts with AWS SDK/SAM commands. This is fine, but once you start getting the gist of what your architecture looks like, I think you will appreciate what Serverless can offer. You can deploy/tear down CloudFormation stacks in a matter of seconds. You can also use serverless-offline to have a local build of your AWS Lambda architecture on your own machine.
All this has saved me hours of grunt work.
I have pushed my Textract code to the staging server, and now I am receiving an error.
It is working on the development system; I can't understand why this is happening.
I am using .NET Core 3.0.
I am following the code sample provided here. [https://github.com/aws-samples/amazon-textract-code-samples/tree/master/src-csharp]
I suspect the issue is with IAM credentials. To rule this out, I installed the AWS SDK tools for Windows and the AWS CLI on the staging server, and then ran the configuration commands (mentioned here [https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration]) from the Command Prompt. I thought the IAM credentials might be getting saved into the environment, but no success.
Code which uploads a file on S3 bucket is uploading it, but while making a request to Textract service, it is crashing.
System.Net.Http.HttpRequestException: Response status code does not indicate success: 400 (Bad Request)
I can't understand what the issue is.
On development, it is working.
Any help?
Finally, we found the solution. It was a very weird issue; we never thought it would come up.
First thing
The message thrown by the API was not clear, so we hosted the application on another server with an upgraded Windows OS. There we learned the problem was related to the keys generated during creation of the IAM user.
Second thing
We then established that our application (Amazon's Textract DLL) was not able to read the keys we had configured from here.
When you configure through the CLI, it creates two files for saving the credentials and reads them from there. Refer to the screenshot below.
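For reference, aws configure writes these two files under the configuring user's home directory (key values and region below are placeholders):

```
# %USERPROFILE%\.aws\credentials
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

# %USERPROFILE%\.aws\config
[default]
region = us-east-1
output = json
```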
The files were there, but the application was still not able to locate them on the staging server. After searching for 4-5 days and talking to AWS support, nothing happened.
Finally, we dove into IIS, made a few changes, and discovered the problem was happening at the IIS level. In IIS there is a setting in the App Pool of the instance called Load User Profile. By default it is set to false, but when we turned it to true, it created a user, just like the user you have for system log-in.
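If you prefer to script the change, this is, to my knowledge, the equivalent appcmd command (the app pool name is a placeholder):

```
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.loadUserProfile:true
```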
Refer to the screenshots below for changing this in the IIS Manager UI.
It creates a user like this.
Hope it helps someone.
I'm trying to use the AWS Explorer in PyCharm to download and edit an existing Lambda function on my AWS account, but I'm unable to find out how to do that. I've read through all the documentation available on the wiki and followed a bunch of tutorials on deploying new Lambda functions, but I can't find out how to download and edit existing functions. I can download the Lambda using the console, but I'm not sure how to make it editable in my PyCharm project, and that seems like a workaround anyway. Is there a way to do this within the AWS Explorer tool?
No, currently (Oct 2019) you can't download a Lambda function's source and edit it locally. If you know the name of the S3 object where the code is stored, you could pull that file down and make changes, re-zip it, re-upload it back to S3, force the Lambda to cold-start (change the memory slider), and it will pick up the new code. But this is extremely brittle.
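As a slightly less brittle alternative, here is a hypothetical sketch using the Lambda API directly: get_function returns a presigned download link for the deployment package, and update_function_code pushes edits back without the S3/memory-slider dance (function name and paths are placeholders):

```python
import io
import urllib.request
import zipfile

import boto3

lam = boto3.client("lambda")
FUNCTION_NAME = "my-function"  # placeholder

# get_function returns a short-lived presigned URL for the deployment package.
url = lam.get_function(FunctionName=FUNCTION_NAME)["Code"]["Location"]
package = urllib.request.urlopen(url).read()

# Unpack locally for editing in PyCharm.
zipfile.ZipFile(io.BytesIO(package)).extractall("my-function-src")

# After editing and re-zipping, push the code straight back:
with open("my-function-src.zip", "rb") as f:
    lam.update_function_code(FunctionName=FUNCTION_NAME, ZipFile=f.read())
```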
Have you tried Cloud9? I find it the best way to work on Lambdas, especially if you are working as a team. The problem with Cloud9, though, is that it doesn't seem to be actively developed, and there is a lot of manual work to update SAM and the dev tools in there. Anyhow, I still recommend Cloud9.
I am working on a serverless setup for a project and ran into a strange error. This was working fine before I had to delete my old certificates and make a new one.
In short, I am following the tutorial series at serverless-stack.com for reference, and when running the apig-test command I get the following error:
{ status: 403,
statusText: 'Forbidden',
data: { message: 'Forbidden' } }
This screams "policy error" to me. So I went to check my policy to make sure it allows execution for the AuthRole, and indeed it does. I verified this in the IAM section under Roles and looked at my service's Auth_Role that was created when I set up Cognito.
I don't want to cause information overload here, but if anyone has any ideas for where to look next, I would much appreciate it, and I'll provide any details you want to see.
One thing I want to note is that if I run the apig-test command with the direct URL to the Lambda function instead of my domain, it works perfectly fine.
This proves that nothing is wrong with my code; it must be a policy setting regarding how I set up the domain.
I ran sls create_domain accordingly, and I see the entries in Route 53 and API Gateway; they finished their 40 minutes many hours ago. I made sure it's using the correct certificate, since I wiped out the other one.
My custom domains have worked in the past, thanks to a plugin I found and this tutorial (https://serverless.com/blog/serverless-api-gateway-domain/); it's only recently that they stopped working, when I realized I needed to add some more domains to my SSL cert.
So I assume the policy error is somewhere around this, but I'm not sure where to look?
OK, I found the answer. In API Gateway, under Custom Domain Names, there is a section called Base Path Mappings. This MUST be set to one of your functions, with the default path of / (or just enter nothing for the path) and the destination set to your Lambda service. This made it work for me.
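For what it's worth, with the serverless-domain-manager plugin from the tutorial linked above, that mapping corresponds to leaving basePath empty in serverless.yml (the domain name below is a placeholder):

```yaml
plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: api.example.com     # placeholder
    basePath: ''                    # empty path = map the domain root "/"
    stage: ${self:provider.stage}
    createRoute53Record: true
```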