OAuth 2 - Authenticate with JSON file without directory - amazon-web-services

So I'm trying to set up, in an AWS Lambda function, a script that pulls data from Google Analytics and drops it into an S3 bucket. So far the script works fine, since I'm authenticating with a client_secrets.json from my computer, but now that I'm trying to do it from Lambda I can't figure out how to copy & paste the JSON content into the script and use it to authenticate. The main issue is that it should be able to authenticate itself without human intervention.

In the code I show above, if you remove the from_json() call and just pass the json_data, it works just fine. Hope it helps.
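For reference, here is a minimal sketch of authenticating from pasted JSON content with no key file on disk, assuming a Google service-account key and the oauth2client / google-api-python-client libraries (the original code is not shown, so all names and values here are illustrative placeholders):

from googleapiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials

# The service-account key pasted directly into the script instead of
# being read from a client_secrets.json file (all values are placeholders).
json_data = {
    "type": "service_account",
    "project_id": "my-project",
    "private_key_id": "...",
    "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
    "client_email": "my-lambda@my-project.iam.gserviceaccount.com",
    "client_id": "...",
    "token_uri": "https://oauth2.googleapis.com/token",
}

# from_json_keyfile_dict() accepts the parsed dict directly, so no file or
# directory is needed and no human intervention is required at run time.
credentials = ServiceAccountCredentials.from_json_keyfile_dict(
    json_data, scopes=["https://www.googleapis.com/auth/analytics.readonly"])

analytics = build("analyticsreporting", "v4", credentials=credentials)

In a real deployment the key would more safely come from an environment variable or a secrets store than from a literal in the source, but the shape of the call is the same.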

Related

How to continue uploading data (including images) to a REST API, like posting on Facebook

I am building a real estate application using Spring Boot (Java); the application runs on an AWS server and images are uploaded to an S3 bucket.
What I want is that when the user adds a property and then closes the app,
the upload task continues in the background and sends a notification when it completes.
I found a resumeUpload method here that looks like just what you want.
There is also an upload example in the AWS S3 documentation.
Hope this helps.
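The linked method belongs to the AWS mobile SDK, but the underlying mechanism is S3's multipart upload, which can be resumed by asking S3 which parts already arrived. A hedged sketch of that mechanism in Python with boto3 (the function and its arguments are illustrative; parts must use the same size as the interrupted upload did):

import os
import boto3

s3 = boto3.client("s3")
PART_SIZE = 5 * 1024 * 1024  # S3's minimum size for all but the last part

def resume_multipart_upload(bucket, key, path, upload_id):
    # Ask S3 which parts already arrived (first 1000; enough for a sketch),
    # so only the missing parts are sent after an interruption.
    done = {p["PartNumber"]: p["ETag"]
            for p in s3.list_parts(Bucket=bucket, Key=key,
                                   UploadId=upload_id).get("Parts", [])}
    parts = []
    total = os.path.getsize(path)
    with open(path, "rb") as f:
        part_number = 1
        offset = 0
        while offset < total:
            data = f.read(PART_SIZE)  # always read, to advance the file position
            if part_number in done:
                etag = done[part_number]  # uploaded before the interruption
            else:
                etag = s3.upload_part(Bucket=bucket, Key=key,
                                      UploadId=upload_id,
                                      PartNumber=part_number,
                                      Body=data)["ETag"]
            parts.append({"PartNumber": part_number, "ETag": etag})
            offset += len(data)
            part_number += 1
    s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id,
                                 MultipartUpload={"Parts": parts})

The Android SDK's transfer classes do essentially this bookkeeping for you, plus running it in a background service.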

Upload GCS files to Google Drive using Airflow

Hi, I am trying to upload a file from GCS to Google Drive using
airflow.contrib.operators.gcs_to_gdrive_operator.GcsToGDriveOperator.
This is what the DAG looks like:
copy_to_gdrive = GcsToGDriveOperator(
    task_id="copy_to_gdrive",
    source_bucket="my_source_bucket_on_gcs",
    source_object="airflow-dag-test/report.csv",
    destination_object="/airflow-test/report.csv",
    gcp_conn_id="bigquery_default",
    dag=dag
)
This code executes without any errors, and in the logs I can see that the file is downloaded locally and uploaded to Google Drive successfully as well.
The code is executed by a service account; the issue I am facing is that I am not able to find the file or the directory this DAG is creating and uploading to.
I have tried several permutations/combinations of the path for "destination_object", but nothing seems to work, and the Google docs are not helpful either.
I can see in the API logs that the drive.create API is being called, but where it creates the file is unknown. Has anyone experienced this? Any help or tip would be of great help. Thanks!
Your service account is a Google account, and, as a Google account, it has access to its own Drive. The files are correctly copied to Drive, but to the Drive of the service account!
You never specify the account, so how can Airflow know that it has to use yours?
Look at the operator documentation:
delegate_to (str) – The account to impersonate, if any. For this to work, the service account making the request must have domain-wide delegation enabled.
Use this parameter, fill it with your email, and enable domain-wide delegation for your service account.
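Concretely, the DAG from the question with delegate_to added (the email address is a placeholder for your own account):

copy_to_gdrive = GcsToGDriveOperator(
    task_id="copy_to_gdrive",
    source_bucket="my_source_bucket_on_gcs",
    source_object="airflow-dag-test/report.csv",
    destination_object="/airflow-test/report.csv",
    gcp_conn_id="bigquery_default",
    delegate_to="you@your-domain.com",  # the user whose Drive should receive the file
    dag=dag
)

With domain-wide delegation enabled for the service account, the file then lands in that user's Drive rather than in the service account's own.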

How can I connect my WordPress database with Lambda?

I am not able to solve a problem:
I want to run a Python script which scrapes information from the web according to an input.
This input will come from WordPress (either from the frontend or from the WordPress database).
Once I have that input, I figured out that AWS Lambda can do the scraping automatically.
Now the problem is: how can I connect my WordPress database with Lambda? API Gateway would get information from the WordPress frontend, save it in DynamoDB, and Lambda could then run the Python code. BUT the input is user-specific, so the user should be able to change that input whenever they want.
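As a hedged sketch of the Lambda side of that flow, assuming a hypothetical DynamoDB table scrape_inputs keyed by user_id (WordPress would write the user's current input there, so changing it needs no redeploy):

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("scrape_inputs")  # hypothetical table written by WordPress

def scrape(query):
    # stand-in for the asker's actual scraping code
    return "scraped results for " + query

def handler(event, context):
    # API Gateway invokes this with the user's id; the latest user-specific
    # input is fetched from DynamoDB on every run.
    user_id = event["pathParameters"]["user_id"]
    item = table.get_item(Key={"user_id": user_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": "no input stored for this user"}
    return {"statusCode": 200, "body": scrape(item["query"])}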

Sinch Framework - Uploading Call Recordings to S3

Does anybody have information on how to make the Sinch framework upload voice call recordings to an AWS S3 bucket?
I've created an IAM user on AWS for this, but could not find where to set the AWS credentials so that Sinch uploads the call recordings automatically. Is it done on the client side, i.e. in the iOS code, or done by the Sinch team manually? Do we need to change anything on the client side for this behaviour?
Please let me know if you have any information regarding this.
Kind Regards,
Engin
It cannot be set by yourself. To have it configured, send an email to support@sinch.com.

Hiding AWS secret from application

I'm a Java backend engineer working on a feature where the frontend (an SPA and an Android app) must send (large) files to S3. Since I have to handle a lot of requests, and for network-overhead reasons, I'm avoiding a 'proxy' service where the frontend sends me the file so that I can forward it to S3, but I have some concerns about the best way to keep my apps secure.
I looked for some solutions but could not find one that handles exactly what I want.
Amazon S3 upload with not showing secret key in frontend
This post almost has my answer, but I don't have enough reputation to comment.
S3 upload directly in JavaScript
I read some documentation on AWS, but I still have some questions and requirements:
The solution must allow an authenticated client/user to send a file directly to S3.
It may make a GET call to fetch some token or the like (without sending a lot of data).
It has to be secure (no secret-key knowledge in the frontend).
Which solution may be good for me?
The backend could generate a signing key and send it to the frontend, which then makes the request to AWS (http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html).
I could use STS to generate temporary credentials for each upload.
Do you think these approaches will work? Which one do you think is better? What are the trade-offs? Is there another way to deal with this problem?
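As an illustration of the first option, a common variant is a presigned upload: the backend, which holds the credentials, signs a short-lived upload policy and hands only that to the frontend. A minimal sketch with boto3 (bucket name and key layout are placeholders):

import boto3

s3 = boto3.client("s3")

def get_upload_token(user_id, filename):
    # Returns a URL plus form fields the frontend POSTs the file with.
    # The signature expires after 5 minutes and is scoped to one key,
    # so no secret key ever reaches the SPA or the Android app.
    return s3.generate_presigned_post(
        Bucket="my-upload-bucket",                      # placeholder
        Key="uploads/{}/{}".format(user_id, filename),  # placeholder layout
        Conditions=[["content-length-range", 0, 100 * 1024 * 1024]],
        ExpiresIn=300,
    )

The GET call mentioned in the requirements would return this dict, and the frontend uploads straight to S3 with it. The STS option differs in that the token returned would be temporary credentials rather than a single presigned request.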
The best thing to do here is to use the Cognito service to generate anonymous credentials in the app that allow an upload to S3. For Android you can then use the SDK to do multipart uploads from the device to S3, which will speed up the process as well.
I couldn't find an exact Android example, but this one is for iOS and the terminology should transfer, just with the other SDK: iOSTransferManager.
You can also call Cognito directly from JavaScript if you have a web-based app: Cognito in JS example.
Hope that helps!
- Chris