Can write to Google Cloud Storage from one machine but not another - google-cloud-platform

I have a weird bug where I'm able to write to my Cloud Storage bucket from one machine but not another. I can't tell if the issue is Vercel or my configuration, but the app is deployed on Vercel, so it should behave the same no matter where I'm accessing it.
upload.ts
import { IncomingMessage } from "http";
import { Storage } from "@google-cloud/storage";
import formidable from "formidable";

export const upload = async (req: IncomingMessage, userId: string) => {
  const storage = new Storage({
    // credentials
  });
  const bucket = storage.bucket(process.env.GCS_BUCKET_NAME as string);

  // parseForm is a local helper that wraps form.parse in a promise
  const form = formidable();
  const { files } = await parseForm(form, req);
  const file = files.filepond;
  const { path } = file;

  const options = {
    destination: `products/${userId}/${file.name}`,
    // precondition: the upload only succeeds if no object with this name exists yet
    preconditionOpts: {
      ifGenerationMatch: 0
    }
  };
  await bucket.upload(path, options);
};
Again, my app is deployed on Vercel, and I'm able to upload images from my own machine, but I can't do it if I try on my phone or another PC/Mac. My Cloud Storage bucket is also public, so I should be able to read/write to it from anywhere. Any clues?

Related

Upload image to Amazon S3 using @aws-sdk/client-s3 and get its location

I am trying to upload an image file to S3 but get this error:
ERROR: MethodNotAllowed: The specified method is not allowed against this resource.
My code uses the @aws-sdk/client-s3 package to upload with this code:
const s3 = new S3({
  region: 'us-east-1',
  credentials: {
    accessKeyId: config.accessKeyId,
    secretAccessKey: config.secretAccessKey,
  }
});

exports.uploadFile = async options => {
  options.internalPath = options.internalPath || (`${config.s3.internalPath + options.moduleName}/`);
  options.ACL = options.ACL || 'public-read';
  logger.info(`Uploading [${options.path}]`);
  const params = {
    Bucket: config.s3.bucket,
    Body: fs.createReadStream(options.path),
    Key: options.internalPath + options.fileName,
    ACL: options.ACL
  };
  try {
    const s3Response = await s3.completeMultipartUpload(params);
    if (s3Response) {
      logger.info(`Done uploading, uploaded to: ${s3Response.Location}`);
      return { url: s3Response.Location };
    }
  } catch (err) {
    logger.error(err, 'unable to upload:');
    throw err;
  }
};
I am not sure what this error means, and once the file is uploaded I need to get its location in S3.
Thanks for any help.
For uploading a single image file you need to be calling s3.upload(), not s3.completeMultipartUpload().
If you had very large files and wanted to upload them in multiple parts, the workflow would look like:
s3.createMultipartUpload()
s3.uploadPart()
s3.uploadPart()
...
s3.completeMultipartUpload()
Looking at the official documentation, it looks like the new way to do a simple S3 upload in the JavaScript SDK is this:
s3.send(new PutObjectCommand(uploadParams));
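A minimal sketch of that v3-style call, assuming a small local file and hypothetical bucket/key/region values; PutObject's response carries no Location field, so the object URL is built by hand:
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const fs = require('fs');

const region = 'us-east-1';
const client = new S3Client({ region }); // credentials resolved from the environment

async function uploadFile(bucket, key, filePath) {
  // Small files can simply be read into memory; very large files are better
  // served by the multipart flow above or @aws-sdk/lib-storage's Upload helper.
  await client.send(new PutObjectCommand({
    Bucket: bucket,
    Key: key,
    Body: fs.readFileSync(filePath)
  }));
  // Build the virtual-hosted-style URL, since the response has no Location
  return `https://${bucket}.s3.${region}.amazonaws.com/${key}`;
}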

Access Amazon S3 public bucket

Hello, I am trying to download data from a public Amazon S3 bucket.
For example https://registry.opendata.aws/noaa-gfs-bdp-pds/
The bucket has a web-accessible folder and I want to download the files inside it.
I know I can do this with the AWS CLI tool,
but I want to know if there is any way to do this with the AWS SDK API (S3 client) in C# / Visual Studio.
I think the issue is authentication: when creating the connection, the S3 client requires credentials such as an access key. I don't have an AWS account, and the bucket I'm trying to reach is public.
Does anyone know how to access this public bucket without any credentials via the API?
Thanks.
If you specify the AnonymousAWSCredentials as the credentials object, any requests that are made to S3 will be unsigned. After that, interacting with the bucket is done like any other call:
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System;

namespace S3TestApp
{
    class Program
    {
        static void Main(string[] args)
        {
            var unsigned = new AnonymousAWSCredentials();
            var client = new AmazonS3Client(unsigned, Amazon.RegionEndpoint.USEast1);
            var listRequest = new ListObjectsRequest
            {
                BucketName = "noaa-gfs-bdp-pds",
                Delimiter = "/",
            };
            ListObjectsResponse listResponse;
            do
            {
                listResponse = client.ListObjects(listRequest);
                foreach (var obj in listResponse.CommonPrefixes)
                {
                    Console.WriteLine("PRE {0}", obj);
                }
                foreach (var obj in listResponse.S3Objects)
                {
                    Console.WriteLine("{0} {1}", obj.Size, obj.Key);
                }
                listRequest.Marker = listResponse.NextMarker;
            } while (listResponse.IsTruncated);
        }
    }
}
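Downloading should work the same way with the unsigned client: a GetObject call (or the SDK's TransferUtility) against one of the listed keys is also sent unsigned, so it should succeed without an AWS account as long as the bucket allows public reads.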

gcloud codebuild sdk, trigger build from cloud function

Trying to use the @google-cloud/cloudbuild client library in a cloud function to trigger a manual build against a project, but no luck. My function runs async and does not throw an error.
Function:
exports.index = async (req, res) => {
  const json = // json that contains build steps using docker, and project id
  // Creates a client
  const cb = new CloudBuildClient();
  try {
    const result = await cb.createBuild({
      projectId: "myproject",
      build: JSON.parse(json)
    });
    return res.status(200).json(result);
  } catch (error) {
    return res.status(400).json(error);
  }
};
I am assuming from the documentation that my default service account is implicit and credentials are sourced properly, or it would throw an error.
Advice appreciated.
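For reference, a minimal sketch of the client library's long-running-operation pattern, assuming the same "myproject" ID and build JSON as above: createBuild resolves to an operation once the build is queued, and the operation itself can be awaited until the build finishes.
const { CloudBuildClient } = require('@google-cloud/cloudbuild');

// buildJson: the same JSON string of build steps used in the question (not shown here)
exports.index = async (req, res) => {
  const cb = new CloudBuildClient();
  try {
    // createBuild resolves to [operation] as soon as the build is queued
    const [operation] = await cb.createBuild({
      projectId: 'myproject',
      build: JSON.parse(buildJson)
    });
    // operation.promise() resolves when the build itself completes (or fails)
    const [build] = await operation.promise();
    return res.status(200).json({ id: build.id, status: build.status });
  } catch (error) {
    return res.status(400).json({ message: error.message });
  }
};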

AWS SDK connection - How is this working?? (Beginner)

I am working on my AWS cert and I'm trying to figure out how the following bit of JS code works:
var AWS = require('aws-sdk');
var uuid = require('node-uuid');

// Create an S3 client
var s3 = new AWS.S3();

// Create a bucket and upload something into it
var bucketName = 'node-sdk-sample-' + uuid.v4();
var keyName = 'hello_world.txt';

s3.createBucket({Bucket: bucketName}, function() {
  var params = {Bucket: bucketName, Key: keyName, Body: 'Hello'};
  s3.putObject(params, function(err, data) {
    if (err)
      console.log(err)
    else
      console.log("Successfully uploaded data to " + bucketName + "/" + keyName);
  });
});
This code successfully uploads a txt file containing the word "Hello". I do not understand how this can identify MY AWS account. It does! But how? It is somehow able to determine that I want a new bucket inside MY account, but this code was taken directly from the AWS docs. I don't know how it could figure that out.
As per Class: AWS.CredentialProviderChain, the AWS SDK for JavaScript looks for credentials in the following locations:
AWS.CredentialProviderChain.defaultProviders = [
  function () { return new AWS.EnvironmentCredentials('AWS'); },
  function () { return new AWS.EnvironmentCredentials('AMAZON'); },
  function () { return new AWS.SharedIniFileCredentials(); },
  function () {
    // if AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set
    return new AWS.ECSCredentials();
    // else
    return new AWS.EC2MetadataCredentials();
  }
]
Environment Variables (useful for testing, or when running code on a local computer)
Local credentials file (useful for running code on a local computer)
ECS credentials (useful when running code in Elastic Container Service)
Amazon EC2 Metadata (useful when running code on an Amazon EC2 instance)
It is highly recommended to never store credentials within an application. If the code is running on an Amazon EC2 instance and a role has been assigned to the instance, the SDK will automatically retrieve credentials from the instance metadata.
The next best method is to store credentials in the ~/.aws/credentials file.
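As an illustration, a minimal sketch of letting the chain resolve credentials on its own, assuming they live in the AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY environment variables or in a [default] profile in ~/.aws/credentials:
var AWS = require('aws-sdk');

// No keys appear in code; the default provider chain finds them in the
// environment, the shared credentials file, or instance/container metadata.
var chain = new AWS.CredentialProviderChain();
chain.resolve(function (err, creds) {
  if (err) console.log('No credentials found:', err);
  else console.log('Resolved credentials for access key:', creds.accessKeyId);
});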

AWS S3 copy to bucket from remote location

There is a large dataset on a public server (~0.5TB, multi-part here), which I would like to copy into my own S3 buckets. It seems like aws s3 cp only works with local files or files already in S3 buckets.
How can I copy those files (either single or multi-part) into S3? Can I use the AWS CLI, or do I need something else?
There's no way to upload it directly to S3 from the remote location, but you can stream the contents of the remote files to your machine and then up to S3. This means you will have downloaded the entire 0.5TB of data, but your computer will only ever hold a tiny fraction of it in memory at a time (it will not be persisted to disk either). Here is a simple implementation in JavaScript:
const request = require('request')
const async = require('async')
const AWS = require('aws-sdk')
const s3 = new AWS.S3()
const Bucket = 'nyu_depth_v2'
const baseUrl = 'http://horatio.cs.nyu.edu/mit/silberman/nyu_depth_v2/'
const parallelLimit = 5
const parts = [
  'basements.zip',
  'bathrooms_part1.zip',
  'bathrooms_part2.zip',
  'bathrooms_part3.zip',
  'bathrooms_part4.zip',
  'bedrooms_part1.zip',
  'bedrooms_part2.zip',
  'bedrooms_part3.zip',
  'bedrooms_part4.zip',
  'bedrooms_part5.zip',
  'bedrooms_part6.zip',
  'bedrooms_part7.zip',
  'bookstore_part1.zip',
  'bookstore_part2.zip',
  'bookstore_part3.zip',
  'cafe.zip',
  'classrooms.zip',
  'dining_rooms_part1.zip',
  'dining_rooms_part2.zip',
  'furniture_stores.zip',
  'home_offices.zip',
  'kitchens_part1.zip',
  'kitchens_part2.zip',
  'kitchens_part3.zip',
  'libraries.zip',
  'living_rooms_part1.zip',
  'living_rooms_part2.zip',
  'living_rooms_part3.zip',
  'living_rooms_part4.zip',
  'misc_part1.zip',
  'misc_part2.zip',
  'office_kitchens.zip',
  'offices_part1.zip',
  'offices_part2.zip',
  'playrooms.zip',
  'reception_rooms.zip',
  'studies.zip',
  'study_rooms.zip'
]
// Each remote file is streamed straight into s3.upload(), which accepts a
// stream of unknown length and handles the multipart chunking internally.
async.eachLimit(parts, parallelLimit, (Key, cb) => {
  s3.upload({
    Key,
    Bucket,
    Body: request(baseUrl + Key)
  }, cb)
}, (err) => {
  if (err) console.error(err)
  else console.log('Done')
})