I must be missing something because I can't find this option here: https://cloud.google.com/sdk/gcloud/reference/beta/functions/deploy
I want to package my function and upload it to a bucket (`--stage-bucket`), but not actually deploy the function.
I'm going to deploy multiple functions (different handlers) from the same package with a Deployment Manager template: type: 'gcp-types/cloudfunctions-v1:projects.locations.functions'
gcloud beta functions deploy insists on packaging AND deploying the function.
Where is the gcloud beta functions package command?
Here is an example of the DM template I plan to run:
resources:
- name: resource-name
  type: 'gcp-types/cloudfunctions-v1:projects.locations.functions'
  properties:
    labels:
      testlabel1: testlabel1value
      testlabel2: testlabel2value
    parent: projects/my-project/locations/us-central1
    location: us-central1
    function: function-name
    sourceArchiveUrl: 'gs://my-bucket/some-zip-i-uploaded.zip'
    environmentVariables:
      test: '123'
    entryPoint: handler
    httpsTrigger: {}
    timeout: 60s
    availableMemoryMb: 256
    runtime: nodejs8
EDIT: I realized I have another question. When I upload a zip, does that zip need to include dependencies? Do I have to run npm install or pip install first and include those packages in the zip, or does Cloud Functions read my requirements.txt and package.json and do that for me?
The SDK CLI does not provide a command to package your function.
This link will provide you with detail on how to zip your files together. There are just two requirements:
The file must be a zip archive.
The file must not exceed the 100MB size limit.
Then you need to call an API, which returns a signed URL you can use to upload the package. Once uploaded, you can specify the URL (minus the extra query parameters) as the location.
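Here is a minimal sketch of that flow, assuming the v1 functions:generateUploadUrl method and hypothetical project, region, and file names:

# Package the source (the zip must stay under the 100MB limit mentioned above).
zip -r function.zip . -x "node_modules/*"

# Ask the Cloud Functions v1 API for a signed upload URL.
UPLOAD_URL=$(curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://cloudfunctions.googleapis.com/v1/projects/my-project/locations/us-central1/functions:generateUploadUrl" \
  | python3 -c "import sys, json; print(json.load(sys.stdin)['uploadUrl'])")

# PUT the zip to the signed URL (these two headers are required by the signed URL).
curl -X PUT \
  -H "Content-Type: application/zip" \
  -H "x-goog-content-length-range: 0,104857600" \
  --upload-file function.zip "$UPLOAD_URL"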
There is no gcloud functions command to "package" your deployment, presumably because this amounts to just creating a zip file, putting it in the right place, and then referencing that place.
Probably the easiest way to do this is to generate a zip file and copy it into a GCS bucket, then set sourceArchiveUrl in the template to the correct location.
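For example, a minimal sketch using the bucket and object names from the template above:

# Zip the function source and copy it to GCS (zip and gsutil are both available in Cloud Shell).
cd /path/to/function/source
zip -r /tmp/some-zip-i-uploaded.zip . -x "node_modules/*"
gsutil cp /tmp/some-zip-i-uploaded.zip gs://my-bucket/
# sourceArchiveUrl in the template then points at gs://my-bucket/some-zip-i-uploaded.zip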
There are two other methods:
You can point to source code in a source repository (this uses the sourceRepository part of the template).
You can get a signed URL (using this API) to upload a ZIP file to with a PUT request, upload the code there, and then pass that same URL to sourceUploadUrl in the template. This is the method discussed in @John's answer. It does not require you to do any signing yourself, and likewise does not require you to create your own bucket to store the code in (the "signed URL" refers to a private Cloud Functions location).
At least with the two zip-file methods, you do not need to include the (publicly available) dependencies: the package.json (or requirements.txt) file will be processed by Cloud Functions to install them. I don't know about the sourceRepository method, but I would expect it to work similarly. There is documentation about how Cloud Functions installs dependencies during deployment for Node.js and Python.
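For instance, a hypothetical package.json that only declares dependencies (nothing vendored into the zip); Cloud Functions runs the install against this during deployment:

cat > package.json <<'EOF'
{
  "name": "my-function",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^4.17.11"
  }
}
EOF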
I have my current Cloud Build working: I connect my GitHub repo to trigger the Cloud Build when I push to the main branch, which then creates my Cloud Function. But I am confused about the --source flag. I have read the Google Cloud Functions docs. They state that the minimal source repository URL is: https://source.developers.google.com/projects/${PROJECT}/repos/${REPO}. If I were to input this into my cloudbuild.yaml file, does this mean that I am mimicking the complete path of my GitHub URL? I am currently just using `.`, which I believe is just the entire root directory.
my cloudbuild.yaml file:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  id: "deploypokedex"
  args:
    - functions
    - deploy
    - my_pokedex_function
    - --source=.
    - --entry-point=get_pokemon
    - --trigger-topic=pokedex
    - --timeout=540s
    - --runtime=python39
    - --region=us-central1
Yes, you are mimicking the complete path of the GitHub URL. --source=. means that you are deploying the source code from your current working directory. You can check this link on how to configure the Cloud Build deployment.
Also, based on the documentation you provided:
If you do not specify the --source flag:
The current directory will be used for new function deployments.
If the function was previously deployed using a local filesystem path, then the function's source code will be updated using the current directory.
If the function was previously deployed using a Google Cloud Storage location or a source repository, then the function's source code will not be updated.
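To illustrate the other form, here is a hedged sketch of deploying from a Cloud Source Repository URL instead of the local directory, assuming PROJECT and REPO are set (the remaining flags mirror your cloudbuild.yaml args):

gcloud functions deploy my_pokedex_function \
  --source="https://source.developers.google.com/projects/${PROJECT}/repos/${REPO}" \
  --entry-point=get_pokemon \
  --trigger-topic=pokedex \
  --timeout=540s \
  --runtime=python39 \
  --region=us-central1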
Let me know if you have questions or clarifications.
I am trying to use the CloudFormation package step to include the Glue script and extra Python files from the repo, uploading them to S3 during packaging.
For the Glue script it's straightforward; I can use:
Properties:
  Command:
    Name: pythonshell # glueetl (Spark) or pythonshell (Python shell)
    PythonVersion: 3
    ScriptLocation: "../glue/test.py"
But how would I be able to do the same for the extra Python files? The following does not work; it seems that I could upload the file using the Include transform, but I'm not sure how to reference it back in --extra-py-files:
DefaultArguments:
  "--extra-py-files":
    - "../glue/test2.py"
Sadly, you can't do this. For Glue, package only supports:
Command.ScriptLocation property for the AWS::Glue::Job resource
Packaging DefaultArguments is not supported. This means that you have to do it "manually" (e.g. with a bash script) outside of CloudFormation.
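For example, a minimal sketch of the manual approach (the bucket name is hypothetical):

# Upload the extra file yourself, before (or outside of) the CloudFormation deploy.
aws s3 cp ../glue/test2.py s3://my-artifacts-bucket/glue/test2.py
# Then reference the S3 URI directly in the template:
#   DefaultArguments:
#     "--extra-py-files": "s3://my-artifacts-bucket/glue/test2.py"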
When I try to run the relevant code in Cloud Shell to deploy the streaming function, it claims that the source folder containing the function does not exist.
The relevant buckets have already been created; it's the function source itself which appears not to be there. Would it be possible to install this separately, maybe?
The original code followed by the error message is given below:
gcloud functions deploy streaming \
--source=./functions/streaming --runtime=python37 \
--stage-bucket=${FUNCTIONS_BUCKET} \
--trigger-bucket=${FILES_SOURCE}
(gcloud.functions.deploy) argument '--source': Provided directory does not exist
As shown in the error message, the path './functions/streaming' does not exist in your Cloud Shell. You have to pass the absolute path to the directory (in your Cloud Shell) where your Python code is located. Please refer to the documentation below:
The source parameter can take three different locations:
--source=SOURCE Location of source code to deploy. Location of the source can be one of the following three options:
Source code in Google Cloud Storage (must be a .zip archive),
Reference to source repository, or
Local filesystem path (root directory of function source).
Note that, depending on your runtime type, Cloud Functions will look for files with specific names for deployable functions. For Node.js, these filenames are index.js or function.js. For Python, this is main.py.
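As a hedged sketch, you could locate the directory in Cloud Shell and then deploy with an absolute path (the checkout location below is hypothetical):

# Find where the streaming function source actually lives.
find ~ -type d -name streaming 2>/dev/null
# Deploy using the absolute path to the directory found above.
gcloud functions deploy streaming \
  --source="$HOME/my-repo/functions/streaming" \
  --runtime=python37 \
  --stage-bucket=${FUNCTIONS_BUCKET} \
  --trigger-bucket=${FILES_SOURCE}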
I have a Python script that I want to run as a Lambda function on AWS. Unfortunately, the package is bigger than the allowed 250 MB when unzipped, mainly due to NumPy (85 MB) and pandas (105 MB).
I have already done the following, but the size is still too big:
1) Excluded unused folders:
package:
  exclude:
    - testdata/**
    - out/**
    - etc/**
2) Zipped the Python packages:
custom:
  pythonRequirements:
    dockerizePip: true
    zip: true
If I unzip the zip file generated by serverless package, I find a .requirements.zip which contains my Python packages, and there is also my virtual environment in the .virtualenv/ folder, which contains, again, all the Python packages. I have tried to exclude the .virtualenv/../lib/python3.6/site-packages/** folder in serverless.yml, but then I get an internal server error when calling the function.
Are there any other parameters to decrease the package size?
The .virtualenv/ directory should not be included in the zip file.
If the directory is located in the same directory as serverless.yml, then it should be added to exclude in the serverless.yml file, otherwise it gets packaged along with the other files:
package:
  exclude:
    - ...
    - .virtualenv/**
  include:
    - ...
(Are you sure you need pandas and NumPy in a microservice? There is nothing "micro" about those libraries.)
There is a way: deploy your Lambda with Zappa (https://github.com/Miserlou/Zappa). It's a convenient way to write, deploy and manage your Python Lambdas anyway. But with Zappa you can specify an option called slim_handler. If set to true, most of your code will reside in S3 and will be pulled down once the Lambda is executed:
AWS currently limits Lambda zip sizes to 50 megabytes. If your project is larger than that, set slim_handler: true in your zappa_settings.json. In this case, your fat application package will be replaced with a small handler-only package. The handler file then pulls the rest of the large project down from S3 at run time! The initial load of the large project may add to startup overhead, but the difference should be minimal on a warm Lambda function. Note that this will also eat into the memory space of your application function.
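A minimal zappa_settings.json sketch with that option enabled (the stage name, app path, region, and bucket here are all hypothetical):

cat > zappa_settings.json <<'EOF'
{
  "production": {
    "app_function": "app.app",
    "aws_region": "us-east-1",
    "s3_bucket": "my-zappa-deployments",
    "slim_handler": true
  }
}
EOF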
I'm doing the basic fulfillment and conversation setup tutorial from api.ai to make a chat bot, and when I try to deploy the function with the command:
gcloud beta functions deploy --stage-bucket venky-bb7c4.appspot.com --trigger-http
(where 'venky-bb7c4.appspot.com' is the bucket name)
it returns the following error message:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Source code size exceeds the limit
I've searched but haven't found any answer; I don't know where the error is.
This is the JS file that appears in the tutorial:
/**
 * HTTP Cloud Function.
 *
 * @param {Object} req Cloud Function request context.
 * @param {Object} res Cloud Function response context.
 */
exports.helloHttp = function helloHttp (req, res) {
  var response = "This is a sample response from your webhook!"; // Default response from the webhook to show it's working
  res.setHeader('Content-Type', 'application/json'); // Requires application/json MIME type
  res.send(JSON.stringify({
    "speech": response, // "speech" is the spoken version of the response
    "displayText": response // "displayText" is the visual version
  }));
};
Neither of these worked for me. The way I was able to fix this was to make sure I was running the deploy from my project directory (the directory containing index.js).
The command creates a zip with the whole content of your current directory (except the node_modules subdirectory), not just the JS file, because your function may use other resources.
The error you see is because the size of the (uncompressed) files in the directory is bigger than 512 MB.
The easiest way to solve this is to move the .js file to its own directory and deploy from there (you can use --local-path to point to the directory containing the source file if you want your working directory to be different from the directory with the function source).
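For example, a minimal sketch of the own-directory fix (the directory name is hypothetical; the bucket is the one from the question):

mkdir helloHttp && cp index.js helloHttp/
cd helloHttp
gcloud beta functions deploy helloHttp \
  --stage-bucket venky-bb7c4.appspot.com \
  --trigger-http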
I tried the --source option and deploying from the folder containing index.js, and still hit a different problem.
This error usually happens when the code being uploaded is large; in my tests, anything more than 100 MB led to the mentioned error.
To resolve this there are two solutions:
Update .gcloudignore to ignore the folders which aren't required for your function.
If option 1 doesn't resolve it, create a bucket in Storage and pass it with the --stage-bucket option.
Create a new bucket for deployment (one time):
gsutil mb gs://my-cloud-functions-deployment-bucket
The bucket name must be globally unique; otherwise it throws an 'already exists' error.
Deploy:
gcloud functions deploy subscribers-firestoreDatabaseChange \
  --trigger-topic firestore-database-change \
  --region us-central1 \
  --runtime nodejs10 \
  --update-env-vars "REDIS_HOST=10.128.0.2" \
  --stage-bucket my-cloud-functions-deployment-bucket
I had similar problems while deploying Cloud Functions. What worked for me was specifying the source folder of the JS files:
gcloud functions deploy functionName --trigger-http --source path_to_project_root_folder
Also be sure to list all unnecessary folders in .gcloudignore so they are excluded from the upload.
Ensure the package folder has a .gitignore file (excluding node_modules). The most recent version of gcloud requires it in order to not upload node_modules; my code size went from 119 MB to 17 KB.
Once I added the .gitignore file, the log also printed:
created .gcloudignore file. See `gcloud topic gcloudignore` for details.
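For reference, the entries below match what gcloud typically generates for a Node.js project; treat this as a sketch rather than an exact listing:

cat > .gcloudignore <<'EOF'
.gcloudignore
.git
.gitignore
node_modules
EOF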