I have a simple Google Cloud Function that creates a JSON file from a JSON string using the code below.
final_json = "{
'InclusionFilenames': [
'Inc_File1.csv',
'Inc_File2.csv',
'Inc_File3.csv',
'Inc_File4.csv'
],
'ExclusionFilenames': [
'Exc_File1.csv',
'Exc_File2.csv',
'Exc_File3.csv',
'Exc_File4.csv'
]
}"
with open("sample.json", "w") as outfile:
json.dump(final_json,outfile)
When I trigger the function, it runs without any error, but I am not able to find the sample.json file anywhere. What is the default storage path for files created as in the code above?
Cloud Functions can only write to the filesystem under /tmp, and that mount is in-memory and ephemeral: anything written there disappears when the instance is recycled. Did you try that path already?
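If you need to keep the output, write it under /tmp and then copy it to a Cloud Storage bucket. A minimal sketch, assuming the google-cloud-storage client library and a placeholder bucket name:

import json
from google.cloud import storage

def save_and_upload(final_json):
    # /tmp is the only writable path inside a Cloud Function
    local_path = "/tmp/sample.json"
    with open(local_path, "w") as outfile:
        # assumes final_json is a valid (double-quoted) JSON string
        json.dump(json.loads(final_json), outfile)
    # copy the file to a bucket so it outlives the function instance
    client = storage.Client()
    bucket = client.bucket("your-bucket-name")  # placeholder bucket name
    bucket.blob("sample.json").upload_from_filename(local_path)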
I'm creating a Lambda function using an existing module. Currently I reference a static ARN inside the step function definition JSON file. I need to reference it dynamically, i.e. whatever ARN is created at runtime. Here's the code:
module "student_lambda"{
source = git#github...// Some git repo
//Other info like vpc, runtime, memory, etc
}
How can I reference this student_lambda ARN in my JSON file for the step function?
Here's the JSON file snippet:

"Student lambda response": {
    "Type": "Task",
    "Resource": "arn:aws:states:::lambda:invoke",
    "Parameters": {
        "Payload"...... // generic code
        "FunctionName": "arn:aws:lambda:us-east-2:..." // here I want to use something like student_lambda.arn
    }
}
Note: the module is declared in the main.tf file. The step function JSON file is in another folder.
I am assuming the file structure is something like this:

=====================
main.tf
variables.tf
folder with json/
  - json file
modules/
=====================
To achieve this, we can create an output for the Lambda function inside the module's output.tf file:
output "lambda_arn" {
value = aws_lambda.<name of the lambda fn>.arn
}
Once this is done, we can reference the output in the definition. Note that Terraform does not interpolate expressions inside a plain JSON file on disk, so the definition has to be rendered as a template (see the templatefile() sketch after the snippet):
"Student lambda response": {
"Type":"Task",
"Resource":"arn:aws:states:::lambda:invoke",
"Parameters":{
"Payload"......
"FunctionName":"${module.student_lambda.lambda_arn}"
}}
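Here, student_lambda_arn is a template variable rather than a direct module reference, because templatefile() only sees the variables you explicitly pass in. A minimal sketch of the rendering step (file path and variable name are illustrative):

definition = templatefile("${path.module}/folder with json/step_function.json", {
  student_lambda_arn = module.student_lambda.lambda_arn
})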
I have a Postman collection with a set of three API calls I'd like to chain together and feed with a data file using the runner. Let's say they're:
/prepareUpload
/upload
/confirmUpload
and the output of each is needed for the next step. I'm happily pulling values out of the responses and putting them into variables ready for the next call, but the bit I seem to be falling down on is that /upload needs a file parameter of type file, and Postman doesn't seem to let me set it to a variable.
I've tried exporting the collection, manually editing the JSON to force it to a variable, and running that, so something like:
<snip>
    {
        "key": "file",
        "contentType": "{{contentType}}",
        "type": "file",
        "src": ["{{fullpath}}"]
    }
],
"options": {
    "formdata": {}
}
where {{contentType}} and {{fullpath}} are coming from my data file, but it never seems to actually do the upload.
Does anyone know if this is possible?
Issue:
If we check the Postman UI, we notice that there is no way to define the file path as a variable. This looks like a limitation when we need to run the collection from different systems.
Solution:
The solution is to hack the collection.json file. Open the JSON and edit the formdata src, replacing it with a variable, let's say file_path, so: {{file_path}}
Now in Postman:
In a pre-request script you can use the code below to set the path:
pm.environment.set("file_path","C:/Users/guest/Desktop/1.png")
You can also save it as an environment variable directly, or pass it on the command line with --env-var when using Newman.
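For example (the collection file name is illustrative; the variable and path are the ones used above):

newman run collection.json --env-var "file_path=C:/Users/guest/Desktop/1.png"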
Note:
Enable the setting that allows reading files outside the working directory (Settings, top right corner).
It's not possible to read local files with Postman (there are at least two issues concerning that in their tracker on GitHub: 798, 7210).
A workaround would be to set up a server that provides the file, so you could fetch the data via a request to that server.
OK, so I found the answer to this, and the short version is: Postman can't do it, but Newman can:
https://github.com/postmanlabs/newman#file-uploads
It's a fair bit more effort to get it set up and working, but it does provide a solution for automating the whole process.
For Postman (as of version 9.1.5) on macOS, you can trick Postman by naming a file in your shared directory with your variable name (e.g. {{uploadMe}}). Then you choose this file (named as the variable) from the file selector, and voilà.
In my case the files I upload are located in the same shared directory; don't forget to set the shared directory in your Postman settings.
The solution is quite simple:
Make sure you have the latest version of Postman.
Go to Postman settings to find your working directory, and add the desired file to that working directory.
In the Body tab, select form-data.
In the Pre-request Script tab, enter the code below.
pm.request.body.mode = "formdata";
pm.request.body.formdata = {
    "key": "preveredKey",
    "type": "file",
    "src": "fileName.extension"
};
I have a question about a Google Cloud Function triggered by an event on a storage bucket (I'm developing it in Python).
I have to read the data of the just-finalized file (a PDF) on the bucket that is triggering the event. I was looking for the file payload on the event object passed to my function (data, context), but it seems there is no payload on that object.
Do I have to use the Cloud Storage library to get the file from the bucket? Is there a way to get the payload directly from the context of the triggered function?
Enrico
From checking the more complete example in the Firebase documentation, it indeed seems that the payload of the file is not included in the parameters. That makes sense, since there's no telling how big the just-finalized file is, and whether it will even fit in the memory of your Functions runtime.
So you'll indeed have to grab the file from the bucket with a separate call, based on the information in the metadata. The full Firebase example grabs the filename and other info from its context/data with:
exports.generateThumbnail = functions.storage.object().onFinalize(async (object) => {
const fileBucket = object.bucket; // The Storage bucket that contains the file.
const filePath = object.name; // File path in the bucket.
const contentType = object.contentType; // File content type.
const metageneration = object.metageneration; // Number of times metadata has been generated. New objects have a value of 1.
...
I'll see if I can find a more complete example. But I'd expect it to work similarly on raw Google Cloud Functions, which Firebase wraps, even when using Python.
Update: from looking at this Storage/Functions/PubSub documentation that the Python binding is apparently based on, it looks like the path should be available as data['resource'] or as data['name'].
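For completeness, here is a minimal Python sketch of that pattern, assuming the google-cloud-storage client library; since the event only carries metadata, the payload is fetched with an explicit download:

from google.cloud import storage

def on_finalize(data, context):
    # triggered by google.storage.object.finalize; data holds only metadata
    bucket_name = data["bucket"]  # bucket that fired the event
    blob_name = data["name"]      # path of the finalized object in the bucket
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(blob_name)
    payload = blob.download_as_bytes()  # explicit call to fetch the file contents
    print("Read {} bytes from gs://{}/{}".format(len(payload), bucket_name, blob_name))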
I am using the Cloud Datastore to Cloud Storage Text template from Cloud Dataflow.
My python code correctly submits the request and uses javascriptTextTransformFunctionName to run the correct function in my Google Cloud Storage bucket.
Here is a minimized part of the code that is running
function format(inJson) {
    var output = {};
    output.administrator = inJson.properties.administrator.keyValue.path[0].id;
    return output;
}
And here is the JSON I am looking to format, cut down: only the other children of "properties" have been removed.
"properties": {
"administrator": {
"keyValue": {
"path": [
{
"kind": "Kind",
"id": "5706504271298560"
}
]
}
}
}
}
And I am getting this exception:
java.lang.RuntimeException:
org.apache.beam.sdk.util.UserCodeException:
javax.script.ScriptException: TypeError: Cannot read
property "keyValue" from undefined in <eval> at line number 5
I understand what the error is saying, but I don't know why it's happening. If you take the format function and that JSON and run them through your browser console, you can easily test and see that it pulls out and returns an object with "administrator" equal to "5706504271298560".
I did not find the solution to your problem, but I hope this is of some help:
I found this post and this one with the same issue. The first one was fixed by installing a NodeJS library, the second one by changing the kind of quotes used with Java.type().
Per the official Nashorn docs: call Java.type with a fully qualified Java class name, then call the returned function to instantiate that class from JavaScript.
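One thing worth checking (an assumption, not confirmed by the question): the Google-provided text templates pass each record to the UDF as a JSON string and expect a string back, so the input has to be parsed before its properties can be accessed. A minimal defensive sketch under that assumption:

function format(inJson) {
    var obj = JSON.parse(inJson);  // records arrive as JSON strings, not objects
    var output = {};
    if (obj.properties && obj.properties.administrator) {
        output.administrator = obj.properties.administrator.keyValue.path[0].id;
    }
    return JSON.stringify(output);  // the template expects a string, not an object
}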
I'm new to WireMock. How do I go about downloading a file using the WireMock stubbing framework?
This is what I have so far:
var stub = FluentMockServer.Start(new FluentMockServerSettings
{
    Urls = new[] { "http://+:5001" },
    StartAdminInterface = true
});

stub.Given(
    Request.Create()
        .WithPath("/myFile")
        .UsingPost()
        .WithBody("download file"))
    .RespondWith(Response.Create()
        .WithStatusCode(200)
        .WithHeader("Content-Type", "application/multipart"));
You can use the withBodyFile API of WireMock:
stubFor(get(urlEqualTo("/body-file"))
.willReturn(aResponse()
.withBodyFile("path/to/myfile.xml")));
However
To read the body content from a file, place the file under the __files directory. By default this is expected to be under src/test/resources when running from the JUnit rule. When running standalone it will be under the current directory in which the server was started. To make your stub use the file, simply call bodyFile() on the response builder with the file’s path relative to __files:
But you can set a custom path when starting WireMock:
wireMockServer = new WireMockServer(wireMockConfig().port(portNumber).usingFilesUnderClasspath("src/main/resources/"));
Now it will look for files in /src/main/resources/__files/
Sources for the above information:
http://wiremock.org/docs/stubbing/
https://github.com/tomakehurst/wiremock/issues/129
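For WireMock.Net, which the C# snippet in the question uses, the response builder has a similar WithBodyFromFile method. A minimal sketch building on the question's stub, with the route, content type, and file path being placeholders:

stub.Given(
    Request.Create()
        .WithPath("/myFile")
        .UsingGet())
    .RespondWith(Response.Create()
        .WithStatusCode(200)
        .WithHeader("Content-Type", "application/octet-stream")
        .WithBodyFromFile("path/to/myfile.pdf"));  // file contents served as the response body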