Lost ability to edit code in AWS Lambda console

I have several Lambdas deployed to AWS, all created as single-file functions in the console. Everything was working fine until I cleared my caches and cookies in Chrome. Since then the function code no longer shows up in the browser (any browser; I tried three). All the Lambda functions also now think they are zip-file based, so I cannot re-enter the code from my git repo. The functions still operate properly; I just cannot edit them.
All new functions I create are also not editable in the console. Something general/global has changed, not specific to any one function.
What can cause this, and across all browsers?
Most importantly, how can I fix this?

You can download your code as a zip file if you go to Actions > Export function and then Download deployment package. Maybe re-uploading the packages will fix your issue.
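If the console export stays broken, the same round trip can be done from a script. Here is a minimal sketch using boto3 (the function name is a placeholder, and it assumes AWS credentials are configured locally): get_function returns a pre-signed URL for the current deployment package, and update_function_code pushes an edited zip back.

    import urllib.request

    import boto3

    FUNCTION_NAME = "my-function"  # placeholder: use your function's name

    lam = boto3.client("lambda")

    # Download the current deployment package via the pre-signed URL
    # that get_function returns.
    code_url = lam.get_function(FunctionName=FUNCTION_NAME)["Code"]["Location"]
    urllib.request.urlretrieve(code_url, "function.zip")

    # ...edit the files inside function.zip and re-zip them, then push
    # the updated package back to the function.
    with open("function.zip", "rb") as f:
        lam.update_function_code(FunctionName=FUNCTION_NAME, ZipFile=f.read())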


Cloud Function build error - failed to get OS from config file for image

I'm seeing this Cloud Build error when I try to deploy a Cloud Function:
"Step #2 - "analyzer": ERROR: failed to initialize cache: failed to create image cache: accessing cache image "us.gcr.io/MY_PROJECT/gcf/us-central1/SOME_KEY/cache:latest": failed to get OS from config file for image 'us.gcr.io/MY_PROJECT/gcf/us-central1/SOME_KEY/cache:latest'"
I'm able to build and emulate the cloud function locally, but I can't deploy it due to this error. I was able to deploy just fine until now. I've looked everywhere and I can't find any discussion about this. Anyone know what's going on here?
UPDATE: I deployed a new function 3 days ago and now I can't seem to deploy an update to it; I get the same error. I'm fairly sure this is happening because of the lifecycle rule I set up to avoid storing old images of functions (see: Firebase storage artifacts is huge and keeps increasing). This rule is important to keep because I don't want to pay for unnecessary storage, but it seems like it might be the source of the problem here. Can someone from Google look into this?
I got the same error, even for code that deployed successfully before.
A workaround is to delete the Docker images for the failing Firebase functions inside Container Registry and re-deploy the functions. (The images will be re-created upon deployment.)
The error still occurs sporadically, so I suspect this may be a bug introduced in Firebase's deployment process. Thankfully for now, the workaround above resolves the issue every time the error comes up.
I also encountered the same problem and solved it by deleting the images in the Container Registry of the Firebase project.
I made a script for this at the time and I'll put it here; the usage is as follows (a rough sketch of the same cleanup is shown after these steps). Please use it if you like.
Install the Google Cloud SDK.
Download the Script
Edit CONTAINER_REGISTRY to your registry name. For example: CONTAINER_REGISTRY=asia.gcr.io/project-name/gcf/asia-northeast1
Grant execute permission. - $ chmod +x script.sh
Execute it. - $ sh script.sh
Deploy your functions.
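The original script isn't reproduced here, but as a rough idea of what such a cleanup does, here is a minimal sketch (not the author's script) that shells out to the gcloud CLI and deletes every image under the gcf registry path, including the cache images. CONTAINER_REGISTRY is a placeholder to be adjusted as in step 3, and it assumes the Cloud SDK is installed and authenticated.

    import json
    import subprocess

    # Placeholder: set this to your own registry path (see step 3 above).
    CONTAINER_REGISTRY = "asia.gcr.io/project-name/gcf/asia-northeast1"

    def gcloud_json(args):
        """Run a gcloud command and return its parsed JSON output."""
        return json.loads(subprocess.check_output(args + ["--format=json"]))

    def list_digests(image):
        """Return the digests of an image, or [] if the path is only a nested repository."""
        try:
            return gcloud_json(["gcloud", "container", "images", "list-tags", image])
        except subprocess.CalledProcessError:
            return []

    def delete_all(repository):
        """Delete every image (cache, worker, ...) under the given repository path."""
        for entry in gcloud_json(["gcloud", "container", "images", "list",
                                  f"--repository={repository}"]):
            image = entry["name"]
            digests = list_digests(image)
            if not digests:
                # Not a leaf image: recurse into it as a nested repository.
                delete_all(image)
                continue
            for d in digests:
                subprocess.check_call(["gcloud", "container", "images", "delete",
                                       f"{image}@{d['digest']}",
                                       "--force-delete-tags", "--quiet"])

    delete_all(CONTAINER_REGISTRY)

After it finishes, redeploy the functions (step 6) and the images get rebuilt.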
I've been having the same problem for the last few days and am in contact with support. I had the same log, and in my case it wasn't connected to the artifacts, because the artifacts rebuild themselves automatically on deploy (read below about a subtle case related to the artifacts and how to fix it). Deleting the functions and redeploying solved it for me.
Artifacts auto cleanup
Note that if the artifacts bucket is empty, then the problem is somewhere else.
But if it's not empty, then to rule out any problems related to the artifacts auto cleanup you can delete the whole "containers" folder manually in the artifacts bucket, which should solve it. Then just redeploy.
Make sure not to delete the artifacts bucket itself!
Doug from Firebase confirmed, in the question you're referring to, that removing the artifacts content is safe.
So, here is how to delete it:
Go to the Google Cloud console, select your project, then Storage -> Browser: https://console.cloud.google.com/storage/browser
Select the "artifacts" bucket
Choose "containers" and delete it
If the problem was here, it should work fine after that.
This happens because the deletion rule you refer to in your question checks the "last updated" timestamp of each file, while on redeploy only some of the files are updated. So the next day the rule deletes some of the files while leaving the others, which leads to an inconsistent state of the bucket. In that case you just remove everything manually.
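If you'd rather script that deletion than click through the console, here is a minimal sketch using the google-cloud-storage client. The bucket name is a placeholder (look up the exact name of your artifacts bucket in the storage browser first); it removes only the objects under the "containers/" prefix and leaves the bucket itself alone, as stressed above.

    from google.cloud import storage

    # Placeholder: the artifacts bucket is usually named something like
    # "<region>.artifacts.<project-id>.appspot.com"; check yours in the console.
    ARTIFACTS_BUCKET = "us.artifacts.my-project.appspot.com"

    client = storage.Client()
    bucket = client.bucket(ARTIFACTS_BUCKET)

    # Delete every object under "containers/" but keep the bucket itself.
    for blob in client.list_blobs(bucket, prefix="containers/"):
        print(f"Deleting {blob.name}")
        blob.delete()

Then redeploy the functions as usual.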

Enable inline editing for the Go code in the AWS Lambda

Since inline code editing is not enabled for Go on AWS Lambda, I am trying to create a Google Chrome extension that would let me edit the Go code by reading the source text or the zip package from the S3 bucket. It would be nice if I could also deploy the updated Go code to the Lambda.
I think I will have to perform the following steps from the extension:
Get the Go code from the S3 bucket or Github
Update it
Create a zip file from the updated code
Upload the zip file to the S3 bucket or Github
Deploy the updated zip file on the Lambda
I have no idea if this is a good approach or whether there is another approach possible. I would appreciate it if anyone could suggest a better approach or tell me whether what I am thinking is feasible.
I like the idea, but unfortunately I am not sure if that is a good idea.
Let me explain:
All the languages that AWS Lambda supports with inline editing are more or less interpreted languages: JavaScript, Python, etc.
The AWS runtime for those languages reads plain text files and compiles/runs them.
Since you deploy plain text files and the runtime takes care of running them, the AWS Lambda console allows you to edit those files.
Go on the other hand, as well as supported languages like Swift or Java, needs to be deployed as a "binary" (I use air quotes because a Java JAR is strictly speaking not a binary but byte code, which is then interpreted by the JVM) to AWS.
The AWS Lambda runtime for those languages expects a binary and not plain text. That is why you can not edit the code of Lambdas using those runtimes in the AWS console.
So even if you were to open that ZIP, you would not find editable code.
Of course you could put the binary and the plain text code in that ZIP and then when you open that ZIP through your Chrome extension, you could show the plain text code to the user.
But then there is the matter of compiling the code into a binary that the AWS Lambda Go runtime can actually run.
So your Chrome extension would need to bundle a Go compiler. I am not sure that is possible, but I am sure it would not be trivial.
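Setting the Chrome extension aside, the compile, zip and deploy part of that workflow is straightforward to script outside the browser. A minimal sketch in Python (function name and paths are placeholders; it assumes the Go toolchain, boto3 and AWS credentials are available, and it deploys the zip directly rather than going through S3):

    import io
    import os
    import subprocess
    import zipfile

    import boto3

    FUNCTION_NAME = "my-go-function"  # placeholder

    # Cross-compile the handler for the Lambda environment; the go1.x runtime
    # expects a Linux binary whose name matches the function's Handler setting.
    env = dict(os.environ, GOOS="linux", GOARCH="amd64", CGO_ENABLED="0")
    subprocess.check_call(["go", "build", "-o", "main", "."], env=env)

    # Package the binary (the executable bit on "main" is preserved in the zip).
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write("main")

    # Push the new package straight to the function.
    boto3.client("lambda").update_function_code(
        FunctionName=FUNCTION_NAME, ZipFile=buf.getvalue()
    )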

How to download and edit lambda with AWS explorer

I'm trying to use the AWS Explorer in PyCharm to download and edit an existing Lambda function on my AWS account, but I can't figure out how to do that. I've read through all the documentation on the wiki and followed a bunch of tutorials on deploying new Lambda functions, but I can't find out how to download and edit existing ones. I can download the Lambda code using the console, but I'm not sure how to make it editable in my PyCharm project, and that seems like a workaround anyway. Is there a way to do this within the AWS Explorer tool?
No, currently (Oct 2019) you can't download a Lambda function's source and edit it locally. If you know the name of the S3 object where the code is stored, you could pull that file down and make changes, re-zip it, re-upload it back to S3, force the Lambda to cold-start (change the memory slider), and it will pick up the new code. But this is extremely brittle.
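For completeness, that S3 round trip looks roughly like this with boto3 (bucket, key and function name are placeholders); instead of relying on a forced cold start, this explicitly points the function at the re-uploaded object with update_function_code, which is the more reliable way to make Lambda pick up the new package:

    import boto3

    BUCKET = "my-deploy-bucket"        # placeholder
    KEY = "lambda/my-function.zip"     # placeholder
    FUNCTION_NAME = "my-function"      # placeholder

    s3 = boto3.client("s3")
    lam = boto3.client("lambda")

    # Pull the package down, edit it locally, then push it back.
    s3.download_file(BUCKET, KEY, "my-function.zip")
    # ... unzip, edit, re-zip as "my-function.zip" ...
    s3.upload_file("my-function.zip", BUCKET, KEY)

    # Tell Lambda to pick up the new object from S3.
    lam.update_function_code(FunctionName=FUNCTION_NAME, S3Bucket=BUCKET, S3Key=KEY)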
Have you tried Cloud9? I find it the best way to work on Lambdas, especially if you are working as a team. The problem with Cloud9 is that it doesn't seem to be actively developed, and you have to do a lot of manual work to update SAM and the dev tools in there. Anyhow, I still recommend Cloud9.

How to get Apache Superset to run on a specified path

I am running Apache Superset at the following address:
http://superset.example.com:8088
That gets redirected to:
http://superset.example.com:8088/superset/welcome
Ideally, users would get redirected to:
http://superset.example.com:8088/welcome
How can that be accomplished? I would also like it to run on port 80 so the port doesn't need to be specified, but I haven't been able to do that either.
This issue covers what you're talking about:
https://github.com/apache/incubator-superset/issues/985
which led to this closed PR:
https://github.com/apache/incubator-superset/pull/1866
You can try to reopen the PR and finish it, or you can try configuring nginx like this guy suggests.
I found it very frustrating to set up a base URL for Superset. If you want to save some time, I condensed a couple of comments into a working example here: https://github.com/komoot/superset-reverse-nginx-example
Below is the way I eventually made it run on an endpoint other than '/'. My use case, though, is to make it work on AWS Lambda in a Serverless environment.
Eventually, what I did to make it work was the following:
In config.py I added another configuration variable and used it in the places where redirect or appbuilder.add_link is used (a rough sketch of this idea is shown after this list).
In the templates folder there are places where '/superset/' is used directly, so even with the first step done right the templates did not render correctly, and I had to change the templates as well. (As of now I have hard-coded this; I need to make it configurable.)
In the front end I added a file called config.ts and used this config wherever a redirect is done in the front end. This fixed up all my front-end links.
The only thing remaining for me was fixing the "Upload CSV to Database" link. When you click this link and enter the data, since Lambda doesn't allow writes, I tried writing to /tmp, but since we don't know whether the next request will be served by the same Lambda or not, this is still an issue. The way I am planning to fix it is to write the files to S3 instead of a local folder; I am still figuring out how to do that.
-- No more nginx or other links. We don't even need gunicorn in this setup.
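As a sketch of step 1 only (the names here are hypothetical, not built-in Superset settings), one way to express it in superset_config.py is to define your own base-path variable and, if your Superset version exposes the FLASK_APP_MUTATOR hook, use it to add an alias route so /welcome leads into the app:

    # superset_config.py -- a sketch only; BASE_PATH is a made-up variable name
    # and the exact redirect targets depend on your Superset version.
    from flask import redirect

    BASE_PATH = ""  # e.g. "" to serve from "/", or "/analytics"

    def app_mutator(app):
        # Alias "/welcome" to the real welcome page so users don't need
        # the "/superset" prefix in the URL.
        @app.route(f"{BASE_PATH}/welcome")
        def welcome_alias():
            return redirect("/superset/welcome")

    # Superset calls FLASK_APP_MUTATOR with the Flask app instance at startup.
    FLASK_APP_MUTATOR = app_mutator

This only aliases the URL; the template and front-end changes in steps 2 and 3 are still what fix the links themselves.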
Thanks

Google Cloud Storage - files not showing

I have over 30 Leaflet maps hosted in my Google Cloud Platform bucket (for example) and it has always been an easy process to upload my folder (which includes an HTML file with sub-folders containing .js and .css files) and share the map publicly.
I tried uploading another map today, but within the folder there are no files showing and I get the following message "There are no live objects in this folder. If you have object versioning enabled, this folder may contain archived versions of objects, which aren't visible in the console. You can list archived object versions using gsutil or the APIs."
Does anyone know what is going on here?
We have also seen this problem, and it seems that the issue is limited to buckets that have spaces in the name.
It's also not reproducible through the gcloud web console, but if you use gsutil to upload a file to a bucket with a space in the name then it won't be visible on the web UI.
I can see from your screenshot that your bucket also has spaces (%20 in the url).
If you need a workaround asap, you could rename your bucket...
But google should fix this soon, I hope.
There is currently an open issue on GCS/Console integration.
If files have any symbols that need URL encoding, they are not visible in the console but are accessible via gsutil/the API (which is currently the recommended workaround).
The issue has been resolved as of 8-May-2018 10:00 UTC.
This can happen if the file doesn't have an extension: the UI treats it as a folder and lets you navigate into it, showing a blank folder instead of the file contents.
We had the same symptom (files show up in API but invisible on the web and via CLI).
The issue turned out to be that we were saving files to "./uploads", which Google interprets as "create a directory literally called '.' and then a subdirectory called uploads."
The fix was to upload to "uploads/" instead of "./uploads". We also just ran a mass copy operation via the API for everything under "./uploads". All visible now!
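The mass copy can be done with a short script; here is a minimal sketch with google-cloud-storage (the bucket name is a placeholder) that copies everything under the "./uploads/" prefix to "uploads/" and then removes the originals:

    from google.cloud import storage

    BUCKET_NAME = "my-bucket"  # placeholder

    client = storage.Client()
    bucket = client.bucket(BUCKET_NAME)

    # Re-create every "./uploads/..." object under "uploads/..." so it is
    # visible in the console, then delete the old copy.
    for blob in client.list_blobs(bucket, prefix="./uploads/"):
        new_name = blob.name.replace("./uploads/", "uploads/", 1)
        bucket.copy_blob(blob, bucket, new_name)
        blob.delete()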
I also had spaces in my URL and it was not working properly yesterday. I checked this morning and everything is working as expected. I still have the spaces in my URL, by the way.