I have a query related to using Google Cloud Storage from a Julia application.
Currently, I am hosting a Julia application (Docker container) on GCP and would like to allow the app to read and write data in Cloud Storage buckets.
I have explored a few packages which promise to do this.
GoogleCloud.jl
The docs for this package show a clear and concise implementation. However, adding this package results in incremental compilation warnings, with many of the packages failing to compile. I have opened an issue on their GitHub page: https://github.com/JuliaCloud/GoogleCloud.jl/issues/41
GCP.jl
The scope is limited; currently the only supported service is BigQuery.
Python package google
This is quite informative and operational, but calling into Python will take a toll on the code's performance. Do advise if this is the only viable option, though.
I would like to know whether there are other methods that can be used to configure a Julia app to work with Google Cloud Storage.
Thanks, I look forward to the suggestions!
GCP.jl is promising, and you may also be able to do this with gRPC, if Julia supports gRPC (see below).
Discovery
Google has two types of SDK (aka Client Library). API Client Libraries are available for all of Google's APIs and services.
Cloud Client Libraries are newer and more language-idiomatic, but are only available for Cloud. Google Cloud Storage (GCS) is part of Cloud but, in this case, I think an API Client Library is worth pursuing...
Google's API (!) Client Libraries are auto-generated from a so-called Discovery document. Interestingly, GCP.jl specifically describes using Discovery to generate the BigQuery SDK and mentions that you can use the same mechanism for any other API Client Library (e.g. GCS).
NOTE: See Google's explanation of the Discovery service.
I'm unfamiliar with Julia but, if you can understand enough of that repo to confirm that it's using the Discovery document to generate APIs, and if you can work out how to reconfigure it for GCS, this approach would provide you with a 100% fidelity SDK for Cloud Storage (and any other Google API or service).
Someone else tried to use the code to generate an SDK for Sheets and had an issue, so it may not be perfect.
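To get a feel for what such a generator consumes, you can fetch the Discovery document for Cloud Storage yourself; it is a public JSON document. A minimal sketch using Java's built-in HTTP client, purely to illustrate the endpoint (the equivalent GET works from any language, Julia included):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchDiscovery {
    public static void main(String[] args) throws Exception {
        // Public Discovery document describing the Cloud Storage JSON API.
        // SDK generators (like the one in GCP.jl) consume documents of this shape.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.googleapis.com/discovery/v1/apis/storage/v1/rest"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON: resources, methods, schemas
    }
}
```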
gRPC
Google publishes Protobufs for the subset of its services that support gRPC. If you'd prefer to use gRPC, it ought to be possible to use the Protobufs in Google's repo to define a gRPC client for Cloud Storage.
We're new to the Amazon Selling Partner API (SP-API) and need to invoke certain SP-APIs for an integration workflow. For some internal reasons, using the Amazon SDKs is a secondary option. With our conventional approach we're able to interact with most APIs; in this case, the AWS request signing and signature generation is where we're stuck.
As per Amazon, using an SDK handles it all internally. Is it possible to use a command-line utility like the AWS CLI to interact with SP-APIs? Not sure if this is feasible. I found amazon-sp-api, but I'm not sure if it is stable / reliable.
I believe there should be ways to interact with SP-API from the command line. If not, at least there should be a tool that is able to produce an AWS request signature (given the request info, key, etc...).
Kindly share your experience and expertise. We're new to AWS, so if I'm confusing AWS with SP-API (especially for request signing; I believe both use the same mechanism), please point it out.
The link you shared to amz.tools does not look like a command-line interface. It is just an SDK generated in NodeJS. There is no way to connect to the API via the command line. You can use Postman if you want to avoid SDKs.
And yes, AWS is not the same thing as SP-API.
You can search GitHub for SDKs generated in other languages; some seem to have a lot of use.
We generated our own SDK in C# because the others didn't fit our criteria.
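If you do end up producing signatures yourself, the SigV4 key-derivation chain is small enough to implement directly. A minimal Java sketch (all values are hypothetical placeholders, and building the canonical request and string-to-sign per the SigV4 spec is omitted):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class SigV4Signer {

    static byte[] hmacSha256(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    // SigV4: chain HMACs over date, region, service, and the literal
    // "aws4_request", then sign the string-to-sign with the derived key.
    static String sign(String secretKey, String dateStamp, String region,
                       String service, String stringToSign) throws Exception {
        byte[] kDate    = hmacSha256(("AWS4" + secretKey).getBytes(StandardCharsets.UTF_8), dateStamp);
        byte[] kRegion  = hmacSha256(kDate, region);
        byte[] kService = hmacSha256(kRegion, service);
        byte[] kSigning = hmacSha256(kService, "aws4_request");
        byte[] sig      = hmacSha256(kSigning, stringToSign);
        StringBuilder hex = new StringBuilder();
        for (byte b : sig) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical inputs; SP-API sits behind API Gateway, hence "execute-api".
        System.out.println(sign("EXAMPLE_SECRET_KEY", "20240101", "us-east-1",
                "execute-api", "AWS4-HMAC-SHA256\n20240101T000000Z\n...\n..."));
    }
}
```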
We're looking to leverage the Neptune Gremlin client library to get automatic load balancing and endpoint refreshes.
There is a blog article here: https://aws.amazon.com/blogs/database/load-balance-graph-queries-using-the-amazon-neptune-gremlin-client/
There is also a repo containing the code here:
https://github.com/awslabs/amazon-neptune-tools/tree/master/neptune-gremlin-client
However, the artifacts aren't published anywhere. Is it still possible to use this? Ideally, we would avoid vendoring the code into our codebase, since we would then forfeit updates.
The artifacts for several of the tools in that repo can be found here:
https://github.com/awslabs/amazon-neptune-tools/releases/tag/amazon-neptune-tools-1.2
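If you download the jar from that release, one way to depend on it without vendoring the source is to install it into your local (or internal) Maven repository. A sketch; the coordinates below are hypothetical placeholders, so use whatever matches the jar you downloaded:

```
mvn install:install-file \
  -Dfile=neptune-gremlin-client.jar \
  -DgroupId=com.example \
  -DartifactId=neptune-gremlin-client \
  -Dversion=1.2 \
  -Dpackaging=jar
```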
Ours is a Spring-Boot based application. For integration with AWS SNS and SQS, we have a couple of options:
Use Spring-Cloud-AWS
Use AWS-SDK-Java 2
I wanted to know if there is any advantage in using one or the other.
When I ask the AWS folks, they tell me that the AWS SDK gets updated regularly and that integration with SNS and SQS is not difficult; hence, there is no need to integrate via Spring-Cloud-AWS.
I tried searching the Gitter channel for Spring-Cloud and could not find any relevant information. The documentation does state that I can update the AWS-SDK version, but it does not state any compelling reason for not using the AWS-SDK directly.
If anyone has some insights, please share.
From the AWS Spring Team:
"From now on, our efforts focus on 3.0 - based on AWS SDK 2.0."
So, if you need AWS SDK 2.0, you probably want to go directly with the SDK.
https://spring.io/blog/2021/03/17/spring-cloud-aws-2-3-is-now-available
For more on what's new in AWS Java SDK 2.0:
https://aws.amazon.com/blogs/developer/aws-sdk-for-java-2-0-developer-preview/
The main advantage over the AWS Java SDK is the Spring-style convenience and the head start we get by using the Spring project. As per the project documentation (https://cloud.spring.io/spring-cloud-aws/reference/html/#using-amazon-web-services):
"Using the SDK, application developers still have to integrate the SDK into their application with a considerable amount of infrastructure related code. Spring Cloud AWS provides application developers already integrated Spring-based modules to consume services and avoid infrastructure related code as much as possible."
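To make that concrete, here is a sketch of the two styles (the queue name and payload are hypothetical). With the raw SDK for Java 2.x you construct the client and each request yourself:

```java
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class RawSdkSender {
    public static void main(String[] args) {
        // Region and credentials come from the default provider chain.
        try (SqsClient sqs = SqsClient.create()) {
            String queueUrl = sqs.getQueueUrl(r -> r.queueName("orders-queue")).queueUrl();
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .messageBody("{\"orderId\": 42}")
                    .build());
        }
    }
}
```

With Spring Cloud AWS, the listener container, polling, and message conversion are wired for you (package names shown are from the 2.x line; the io.awspring.cloud packages replace them in later releases):

```java
import org.springframework.cloud.aws.messaging.listener.annotation.SqsListener;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {
    @SqsListener("orders-queue")
    public void onMessage(String payload) {
        System.out.println("received " + payload);
    }
}
```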
I don't know much about web development and cloud computing. From what I've read, when using Cloud Functions as the webhook service for Dialogflow, you are limited to writing code in just one source file. I would like to create a really complex Dialogflow agent, so it would be handy to have an organized code structure to make development easier.
I've recently discovered Cloud Run, which seems like it can also handle webhook requests and makes it possible to develop a more complex code structure.
I don't want to use Cloud Run just because it is inconvenient to write everything in one file, but on the other hand it would be strange to have a cloud function consisting of a single file with thousands of lines of code.
Is it possible to have multiple files in a single cloud function?
Is Cloud Run suitable for my problem (creating a complex Dialogflow agent)?
Is it possible to have multiple files in a single cloud function?
Yes. When you deploy to Google Cloud Functions, you create a bundle with all your source files, or have it pull from a source repository.
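This holds for any of the Cloud Functions runtimes, not just Node. As an illustrative sketch, here is a Java function split across two source files in the same bundle (assumes the Functions Framework API, com.google.cloud.functions:functions-framework-api; the class and file names are hypothetical):

```java
// WebhookFunction.java: the deployed HTTP entry point.
import com.google.cloud.functions.HttpFunction;
import com.google.cloud.functions.HttpRequest;
import com.google.cloud.functions.HttpResponse;

public class WebhookFunction implements HttpFunction {
    @Override
    public void service(HttpRequest request, HttpResponse response) throws Exception {
        // Delegates to logic that lives in a second source file.
        response.getWriter().write(FulfillmentLogic.reply());
    }
}
```

```java
// FulfillmentLogic.java: a helper class in the same deployed bundle.
public class FulfillmentLogic {
    static String reply() {
        return "{\"fulfillmentText\": \"hello\"}";
    }
}
```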
But Dialogflow only allows index.js and package.json in the Built-In Editor
For simplicity, the built-in code editor only allows you to edit those two files. But the built-in editor is mostly just meant for basic testing. If you're doing serious coding, you probably already have an environment you prefer to use to code and deploy that code.
Is Cloud Run suitable?
Certainly. The biggest thing Cloud Run will get you is complete control over your runtime environment, since you're specifying the details of that environment in addition to the code.
The biggest downside, however, is that you also have to determine the details of that environment. Cloud Functions provides an HTTPS server without you having to worry about those details, as long as the rest of the environment is suitable.
What other options do I have?
Anywhere you want! Dialogflow only requires that your webhook:
Be at a public address (i.e. one that Google can resolve and reach)
Run an HTTPS server at that address with a non-self-signed certificate
During testing, it is common to run it on your own machine via a tunnel such as ngrok, but this isn't a good idea in production. If you're already familiar with running an HTTPS server in another environment, and you wish to continue using that environment, you should be fine.
Is there an option to use gcloud commands programmatically via Java?
I see that not all of the Google-provided libraries have all of the functionality that is present in the gcloud command.
Ugh :-(
You're correct but there's a better solution.
Please see this explanation:
https://cloud.google.com/apis/docs/client-libraries-explained
I discourage you from shelling out to the gcloud command from Java to solve your problem, and from attempting to make the underlying REST calls directly.
In summary:
All of Google's services are available in all of the supported languages using the older client libraries, called the API Client Libraries. The API Client Libraries are machine-generated and mostly guaranteed to perfectly reflect the underlying services:
https://developers.google.com/api-client-library/
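For example, listing Cloud Storage buckets with the machine-generated API Client Library. This is a sketch; it assumes the com.google.apis:google-api-services-storage and com.google.auth:google-auth-library-oauth2-http artifacts are on the classpath, and "my-project" is a hypothetical project ID:

```java
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.gson.GsonFactory;
import com.google.api.services.storage.Storage;
import com.google.api.services.storage.model.Bucket;
import com.google.auth.http.HttpCredentialsAdapter;
import com.google.auth.oauth2.GoogleCredentials;

public class ListBucketsApiClient {
    public static void main(String[] args) throws Exception {
        // Credentials come from Application Default Credentials.
        Storage storage = new Storage.Builder(
                GoogleNetHttpTransport.newTrustedTransport(),
                GsonFactory.getDefaultInstance(),
                new HttpCredentialsAdapter(GoogleCredentials.getApplicationDefault()))
                .setApplicationName("example-app")
                .build();
        // getItems() can be null if the project has no buckets.
        for (Bucket b : storage.buckets().list("my-project").execute().getItems()) {
            System.out.println(b.getName());
        }
    }
}
```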
For Google Cloud Platform, a newer (better) set of client libraries is available, but these have required hand-coding. It's not a good excuse but, because of this, these libraries have lagged their underlying services. The lag includes some services not being available in some languages, some methods of some services not being available, etc.
https://cloud.google.com/apis/docs/cloud-client-libraries
So, this creates a few challenges but...
If you require one set of libraries for everything, go with the API Client Libraries.
Otherwise:
If the Cloud Client Library is available, use it.
If the Cloud Client Library is not available, use the API Client Library.
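For comparison, the same bucket listing with the (hand-written) Cloud Client Library is more idiomatic. Again a sketch, assuming com.google.cloud:google-cloud-storage on the classpath:

```java
import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class ListBucketsCloudClient {
    public static void main(String[] args) {
        // Project and credentials come from Application Default Credentials.
        Storage storage = StorageOptions.getDefaultInstance().getService();
        for (Bucket bucket : storage.list().iterateAll()) {
            System.out.println(bucket.getName());
        }
    }
}
```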
URL:
cloud.google.com/compute/docs/containers/deploying-containers
I found the description below:
"You can only use this feature through the Google Cloud Platform Console or the gcloud command-line tool, not the API."
So here it's not possible using the API; it's only possible using the gcloud command line.
Using gcloud commands from Java is therefore still an issue.