I currently have a parse-server deployment working; however, every time I upload a file it is stored with the GridStore file adapter instead of the S3 one. My app JS looks like this:
// Example express application adding the parse-server module to expose Parse
// compatible API routes.
var express = require('express');
var ParseServer = require('parse-server').ParseServer;
var path = require('path');
var S3Adapter = require('parse-server').S3Adapter;
var databaseUri = process.env.DATABASE_URI || process.env.MONGOLAB_URI;
if (!databaseUri) {
console.log('DATABASE_URI not specified, falling back to localhost.');
}
var api = new ParseServer({
databaseURI: databaseUri || 'mongodb://localhost:27017/dev',
cloud: process.env.CLOUD_CODE_MAIN || __dirname + '/cloud/main.js',
appId: process.env.APP_ID || 'somAppId',
masterKey: process.env.MASTER_KEY || '', //Add your master key here. Keep it secret!
serverURL: process.env.SERVER_URL || 'http://localhost:1337/parse', // Don't forget to change to https if needed
liveQuery: {
classNames: ["Posts", "Comments"] // List of classes to support for query subscriptions
},
filesAdapter: new S3Adapter(
"S31",
"S32",
"bucket-name",
{directAccess: true}
)
});
// Client-keys like the javascript key or the .NET key are not necessary with parse-server
// If you wish to require them, you can set them as options in the initialization above:
// javascriptKey, restAPIKey, dotNetKey, clientKey
var app = express();
// Serve static assets from the /public folder
app.use('/public', express.static(path.join(__dirname, '/public')));
// Serve the Parse API on the /parse URL prefix
var mountPath = process.env.PARSE_MOUNT || '/parse';
app.use(mountPath, api);
// Parse Server plays nicely with the rest of your web routes
app.get('/', function(req, res) {
res.status(200).send('Make sure to star the parse-server repo on GitHub!');
});
// There will be a test page available on the /test path of your server url
// Remove this before launching your app
app.get('/test', function(req, res) {
res.sendFile(path.join(__dirname, '/public/test.html'));
});
var port = process.env.PORT || 1337;
var httpServer = require('http').createServer(app);
httpServer.listen(port, function() {
console.log('parse-server-example running on port ' + port + '.');
});
// This will enable the Live Query real-time server
ParseServer.createLiveQueryServer(httpServer);
After I deploy I do not get any errors, and everything seems to be running fine. However, when I upload a file it goes to my mLab DB instead of my S3 storage. Do I need to do something else to configure this? I'm using the docs as a reference:
https://github.com/ParsePlatform/parse-server/wiki/Configuring-File-Adapters
I also noticed that the parse-server repo does not have an S3FileAdapter:
https://github.com/ParsePlatform/parse-server/tree/master/src/Adapters/Files
Any help will be greatly appreciated!
Thanks,
David
The mLab DB does not store files; it only stores a link to the file. Parse Server uploads the file to S3 first and, if that succeeds, returns a filename and writes that name to the DB. I use a curl command to check file writing; if it works, your settings should be okay.
curl -X POST \
-H "X-Parse-Application-Id: YOUR-APP-ID" \
-H "X-Parse-REST-API-Key: YOUR-KEY" \
-H "Content-Type: text/plain" \
-d 'Hello, it is me!' \
http://your-parse-server/parse/files/filename.txt
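If the adapter is wired up correctly, a successful upload returns a small JSON body with the stored file's name and URL, and with directAccess enabled the URL should point at your bucket rather than the Parse mount. The shape below is illustrative, not an exact response:
{
  "url": "https://bucket-name.s3.amazonaws.com/xxxx-filename.txt",
  "name": "xxxx-filename.txt"
}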
I'm using this configuration and it works fine:
filesAdapter: new S3Adapter(
"ACCESS_KEY",
"SECRET_KEY",
"BUCKET_NAME",
"REGION"
)
I was facing the exact same problem. Going through parse-server/lib/Adapters/AdapterLoader.js, I found that filesAdapter accepts either a custom module with configuration options or a function. So I tried the solution below, which works fine for me:
var S3Adapter = require('parse-server').S3Adapter;
var api = new ParseServer({
...
filesAdapter: () => {
return new S3Adapter("ACCESS_KEY", "SECRET_KEY", "BUCKET_NAME")
}
...
});
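As a side note, rather than hardcoding the keys in either form, you can read them from environment variables. This is only a sketch mirroring the constructor form used above; the variable names S3_ACCESS_KEY, S3_SECRET_KEY, and S3_BUCKET are my own placeholders, not anything parse-server reads automatically:
filesAdapter: new S3Adapter(
  process.env.S3_ACCESS_KEY,  // hypothetical env var holding the access key
  process.env.S3_SECRET_KEY,  // hypothetical env var holding the secret key
  process.env.S3_BUCKET,      // hypothetical env var holding the bucket name
  {directAccess: true}
)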
I need to make requests to an API that accepts authentication tokens, and I want to use a dynamically generated token by running cmd.exe /c GenerateToken.bat instead of having to run my program and then manually paste the value into Postman every time.
I imagine something like a header whose value is filled in dynamically from the script's output. How can I set the value of an HTTP header to contain the stdout output of a program or a batch file?
The short answer is: you can't. This is deliberate; both pre-request and test scripts (the only way, other than a collection runner, to make your environment dynamic) run in the Postman sandbox, which has limited functionality.
More information on what is available is on the postman-sandbox GitHub repository page and in the Postman docs (scroll to the bottom to see which libraries you can import).
You do have a few options, as described in the comments. Postman allows sending requests and parsing the response in scripts, so you can automate this way. You do need a server to handle the request and execute your script; the simplest option is probably a small server supporting CGI (I won't detail that here, as I feel it's too big a scope for this answer), but other options are also available, such as a small PHP or Node server.
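For illustration, here is a minimal sketch of such a Node server. It assumes GenerateToken.bat sits next to the script and prints the token to stdout; the port and path are arbitrary choices of mine:
// token-server.js - tiny local helper that runs the batch file and
// returns its stdout so a Postman pre-request script can fetch it.
const http = require('http');
const { execFile } = require('child_process');

http.createServer((req, res) => {
  // cmd.exe /c runs the batch file and exits; its stdout is the token.
  execFile('cmd.exe', ['/c', 'GenerateToken.bat'], (err, stdout) => {
    if (err) {
      res.writeHead(500);
      res.end('token generation failed');
      return;
    }
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(stdout.trim());
  });
}).listen(3000);
With this running, your_server_endpoint in the script below would become http://localhost:3000/.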
Once you do have a server, the pre-request script is very simple:
const requestOptions = {
url: `your_server_endpoint`,
method: 'GET'
}
pm.sendRequest(requestOptions, function (err, res) {
if (err) {
throw new Error(err);
} else if (res.code != 200) {
throw new Error(`Non-200 response when fetching token: ${res.code} ${res.status}`);
} else {
var token = res.text();
pm.environment.set("my_token", token);
}
});
You can then set the header as {{my_token}} in the "Headers" tab, and it will be updated once the script runs.
You can do something similar to this from Pre-request Scripts at the collection level. This is available in Postman for 9 different authorization and authentication methods.
Below is sample code, taken from this article, that shows how to do this in a Pre-request Script for OAuth 2:
// Refresh the OAuth token if necessary
var tokenDate = new Date(2010,1,1);
var tokenTimestamp = pm.environment.get("OAuth_Timestamp");
if(tokenTimestamp){
tokenDate = Date.parse(tokenTimestamp);
}
var expiresInTime = pm.environment.get("ExpiresInTime");
if(!expiresInTime){
expiresInTime = 300000; // Set default expiration time to 5 minutes
}
if((new Date() - tokenDate) >= expiresInTime)
{
pm.sendRequest({
url: pm.variables.get("Auth_Url"),
method: 'POST',
header: {
'Accept': 'application/json',
'Content-Type': 'application/x-www-form-urlencoded',
'Authorization': pm.variables.get("Basic_Auth")
}
}, function (err, res) {
pm.environment.set("OAuth_Token", res.json().access_token);
pm.environment.set("OAuth_Timestamp", new Date());
// Set the ExpiresInTime variable to the time given in the response if it exists
if(res.json().expires_in){
expiresInTime = res.json().expires_in * 1000;
}
pm.environment.set("ExpiresInTime", expiresInTime);
});
}
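With this in place you can reference {{OAuth_Token}} wherever the request needs it (for example in an Authorization header), just as {{my_token}} was used above.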
I am working on a project which integrates Google Cloud's speech-to-text API in an Android and iOS environment. I ran through the example code provided (https://cloud.google.com/speech-to-text/docs/samples) and was able to get it to run, and I used it as a template to add voice to my app. However, there is a serious danger in the samples, specifically in generating the AccessToken (Android snippet below):
// ***** WARNING *****
// In this sample, we load the credential from a JSON file stored in a raw resource
// folder of this client app. You should never do this in your app. Instead, store
// the file in your server and obtain an access token from there.
// *******************
final InputStream stream = getResources().openRawResource(R.raw.credential);
try {
final GoogleCredentials credentials = GoogleCredentials.fromStream(stream)
.createScoped(SCOPE);
final AccessToken token = credentials.refreshAccessToken();
This was fine for developing and testing locally, but as the comment indicates, it isn't safe to ship the credential file in a production app build. So what I need to do is replace this code with a request to a server endpoint, and additionally I need to write the endpoint that will take the request and pass back a token. Although I found some very interesting tutorials on Firebase Admin libraries generating tokens, I couldn't find anything about doing a similar operation for GCP APIs.
Any suggestions/documentation/examples that could point me in the right direction are appreciated!
Note: The server endpoint will be a Node.js environment.
Sorry for the delay; I was able to get it all working together and am now circling back to post an extremely simplified how-to. To start, I installed the following library in the server endpoint project: https://www.npmjs.com/package/google-auth-library
The server endpoint in this case lacks any authentication/authorization etc. for simplicity's sake; I'll leave that part up to you. We are also going to pretend this endpoint is reachable at https://www.example.com/token.
The expectation is that calling https://www.example.com/token will result in a response containing a string token, a number for expires, and some extra info about how the token was generated,
i.e.:
{"token":"sometoken", "expires":1234567, "info": {... additional stuff}}
Also, for this example I used a ServiceAccountKey file stored on the server.
The suggested route is to set up a server environment variable and use https://cloud.google.com/docs/authentication/production#finding_credentials_automatically; however, the key file is used here for the example's sake and is easy enough for a quick test. These files look something like the following (honor system: don't steal my private key):
ServiceAccountKey.json
{
"type": "service_account",
"project_id": "project-id",
"private_key_id": "378329234klnfgdjknfdgh9fgd98fgduiph",
"private_key": "-----BEGIN PRIVATE KEY-----\nThisIsTotallyARealPrivateKeyPleaseDontStealIt=\n-----END PRIVATE KEY-----\n",
"client_email": "project-id#appspot.gserviceaccount.com",
"client_id": "12345678901234567890",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/project-id%40appspot.gserviceaccount.com"
}
So here it is: a simple endpoint that spits out an AccessToken and a number indicating when the token expires (so you can call for a new one later).
endpoint.js
const express = require("express");
const { GoogleAuth } = require("google-auth-library");
const serviceAccount = require("./ServiceAccountKey.json");
const googleauthoptions = {
  scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  credentials: serviceAccount
};
const app = express();
const port = 3000;
const auth = new GoogleAuth(googleauthoptions);
auth.getClient().then(client => {
  app.get('/token', (req, res) => {
    client
      .getAccessToken()
      .then((clientresponse) => {
        if (clientresponse.token) {
          return clientresponse.token;
        }
        return Promise.reject('unable to generate an access token.');
      })
      .then((token) => {
        return client.getTokenInfo(token).then(info => {
          const expires = info.expiry_date;
          return res.status(200).send({ token, expires, info });
        });
      })
      .catch((reason) => {
        console.log('error: ' + reason);
        res.status(500).send({ error: reason });
      });
  });
  app.listen(port, () => {
    console.log(`Server is listening on https://www.example.com:${port}`);
  });
});
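You can sanity-check the endpoint with a plain GET; locally that would be:
curl http://localhost:3000/token
and it should print the JSON shape shown earlier.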
Almost done now; I'll use Android as an example. The first clip shows how it was originally pulling the credential from a file on the device:
public static final List<String> SCOPE = Collections.singletonList("https://www.googleapis.com/auth/cloud-platform");
final GoogleCredentials credentials = GoogleCredentials.fromStream(this.mContext.getResources().openRawResource(R.raw.credential)).createScoped(SCOPE);
final AccessToken accessToken = credentials.refreshAccessToken();
final String value = accessToken.getTokenValue();
final long expires = accessToken.getExpirationTime().getTime();
final SharedPreferences prefs = getSharedPreferences(PREFS, Context.MODE_PRIVATE);
prefs.edit().putString(PREF_ACCESS_TOKEN_VALUE, value).putLong(PREF_ACCESS_TOKEN_EXPIRATION_TIME, expires).apply();
fetchAccessToken();
Now we get our token from the endpoint over the internet (not shown). With the token and expires information in hand, we handle it in the same manner as if it had been generated on the device:
//
// let's pretend `endpoint` contains the results of our internet request against www.example.com/token
final String value = endpoint.token;
final long expires = endpoint.expires;
final SharedPreferences prefs = getSharedPreferences(PREFS, Context.MODE_PRIVATE);
prefs.edit().putString(PREF_ACCESS_TOKEN_VALUE, value).putLong(PREF_ACCESS_TOKEN_EXPIRATION_TIME, expires).apply();
fetchAccessToken();
Anyway hopefully that is helpful if anyone has a similar need.
===== re: AlwaysLearning comment section =====
Compared to the original file credential based solution:
https://github.com/GoogleCloudPlatform/android-docs-samples/blob/master/speech/Speech/app/src/main/java/com/google/cloud/android/speech/SpeechService.java
In my specific case I am interacting with a secured API endpoint that is unrelated to Google, via the react-native environment (which sits on top of Android and uses JavaScript).
I already have a mechanism to securely communicate with the api endpoint I created.
So conceptually, in react-native I call
MyApiEndpoint()
which gives me a token / expires pair, i.e.
token = "some token from the api" // token info returned from the api
expires = 3892389329237 // expiration time returned from the api
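A minimal sketch of what that JavaScript side could look like (the URL and the SpeechModule bridge name are my own placeholders, and the bridge itself must be registered natively):
import { NativeModules } from 'react-native';

async function refreshSpeechToken() {
  // Fetch { token, expires } from our own secured endpoint.
  const res = await fetch('https://www.example.com/token');
  const { token, expires } = await res.json();
  // Hand the pair down to the native layer; setToken() is the Java
  // method added to SpeechService.java below.
  NativeModules.SpeechModule.setToken(token, expires);
}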
I then pass that information from react-native down to Java and update the Android prefs with it via the following function (which I added to the SpeechService.java file):
public void setToken(String value, long expires) {
final SharedPreferences prefs = getSharedPreferences(PREFS, Context.MODE_PRIVATE);
prefs.edit().putString(PREF_ACCESS_TOKEN_VALUE, value).putLong(PREF_ACCESS_TOKEN_EXPIRATION_TIME, expires).apply();
fetchAccessToken();
}
This function adds the token and expires content to the well-known shared preference location and kicks off AccessTokenTask().
The AccessTokenTask was modified to simply pull from the preferences:
private class AccessTokenTask extends AsyncTask<Void, Void, AccessToken> {
    protected AccessToken doInBackground(Void... voids) {
        final SharedPreferences prefs = getSharedPreferences(PREFS, Context.MODE_PRIVATE);
        String tokenValue = prefs.getString(PREF_ACCESS_TOKEN_VALUE, null);
        long expirationTime = prefs.getLong(PREF_ACCESS_TOKEN_EXPIRATION_TIME, -1);
        if (tokenValue != null && expirationTime != -1) {
            return new AccessToken(tokenValue, new Date(expirationTime));
        }
        return null;
    }
}
You may notice I don't do much with the expires information here; I do the expiration checking elsewhere.
Here you have a couple of useful links:
Importing the Google Cloud Storage Client library in Node.js
Cloud Storage authentication
I am looking to add basic user authentication to a static site I will have up on AWS, so that only those with the proper username + password (which I will supply) have access to see the site. I found s3auth and it seems to be exactly what I am looking for; however, I am wondering if I will need to somehow set up authorization for pages besides index.html. For example, I have 3 pages: index, about, and contact.html. Without authentication set up for about.html, what is stopping an individual from directly accessing the site via www.mywebsite.com/about.html? I am mostly looking for clarification, or any resources anyone can provide to explain this!
Thank you for your help!
This is the perfect use for Lambda@Edge.
Because you're hosting your static site on S3, you can easily and very economically (pennies) add some really great features to your site by using CloudFront, AWS's content distribution network, to serve your site to your users. You can learn how to host your site on S3 with CloudFront (including 100% free SSL) here.
While your CloudFront distribution is deploying, you'll have some time to set up the Lambda that you'll be using to do the basic user auth. If this is your first time creating a Lambda, or creating a Lambda for use @Edge, the process is going to feel really complex, but if you follow my step-by-step instructions below you'll be doing serverless basic auth that is infinitely scalable in less than 10 minutes. I'm going to use us-east-1 for this, and it's important to know that if you're using Lambda@Edge you should author your functions in us-east-1; when they're associated with your CloudFront distribution they'll automagically be replicated globally. Let's begin...
Head over to Lambda in the AWS console, and click on "Create Function"
Create your Lambda from scratch and give it a name
Set your runtime as Node.js 8.10
Give your Lambda some permissions by selecting "Choose or create an execution role"
Give the role a name
From Policy Templates select "Basic Lambda@Edge permissions (for CloudFront trigger)"
Click "Create function"
Once your Lambda is created, take the following code and paste it into the index.js file of the Function Code section. You can update the username and password you want to use by changing the authUser and authPass variables:
'use strict';
exports.handler = (event, context, callback) => {
// Get request and request headers
const request = event.Records[0].cf.request;
const headers = request.headers;
// Configure authentication
const authUser = 'user';
const authPass = 'pass';
// Construct the Basic Auth string
const authString = 'Basic ' + Buffer.from(authUser + ':' + authPass).toString('base64');
// Require Basic authentication
if (typeof headers.authorization == 'undefined' || headers.authorization[0].value != authString) {
const body = 'Unauthorized';
const response = {
status: '401',
statusDescription: 'Unauthorized',
body: body,
headers: {
'www-authenticate': [{key: 'WWW-Authenticate', value:'Basic'}]
},
};
return callback(null, response);
}
// Continue request processing if authentication passed
callback(null, request);
};
Click "Save" in the upper right hand corner.
Now that your Lambda is saved it's ready to attach to your CloudFront distribution. In the upper menu, select Actions -> Deploy to Lambda@Edge.
In the modal that appears, select the CloudFront distribution you created earlier from the drop-down menu, leave the Cache Behavior as *, change the CloudFront Event to "Viewer Request", and select/tick "Include Body". Select/tick "Confirm deploy to Lambda@Edge" and click "Deploy".
And now you wait. It takes a few minutes (15-20) to replicate your Lambda@Edge function across all regions and edge locations. Go to CloudFront to monitor the deployment of your function. When your CloudFront distribution status says "Deployed", your Lambda@Edge function is ready to use.
Deploying Lambda@Edge is quite difficult to replicate via the console, so I have created a CDK stack to which you just add your own credentials and domain name, then deploy:
https://github.com/apoorvmote/cdk-examples/tree/master/password-protect-s3-static-site
I have tested the following function with the Node.js 12.x runtime:
exports.handler = async (event, context, callback) => {
const request = event.Records[0].cf.request
const headers = request.headers
const user = 'my-username'
const password = 'my-password'
const authString = 'Basic ' + Buffer.from(user + ':' + password).toString('base64')
if (typeof headers.authorization === 'undefined' || headers.authorization[0].value !== authString) {
const response = {
status: '401',
statusDescription: 'Unauthorized',
body: 'Unauthorized',
headers: {
'www-authenticate': [{key: 'WWW-Authenticate', value:'Basic'}]
}
}
return callback(null, response)
}
callback(null, request)
}
By now, this is also possible with CloudFront Functions, which I like more because it reduces the complexity even further (from what is already not too complex with Lambda). Here's my write-up of what I just did...
It's basically 3 things that need to be done:
Create a CloudFront function to add Basic Auth into the request.
Configure the Origin of the CloudFront distribution correctly in a few places.
Activate the CloudFront function.
That's it, no particular bells & whistles otherwise. Here's what I've done:
First, go to CloudFront, then click on Functions on the left, create a new function with a name of your choice (no region etc. necessary) and then add the following as the code of the function:
function handler(event) {
var user = "myuser";
var pass = "mypassword";
function encodeToBase64(str) {
var chars =
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
for (
// initialize result and counter
var block, charCode, idx = 0, map = chars, output = "";
// if the next str index does not exist:
// change the mapping table to "="
// check if idx has no fractional digits
str.charAt(idx | 0) || ((map = "="), idx % 1);
// "8 - idx % 1 * 8" generates the sequence 2, 4, 6, 8
output += map.charAt(63 & (block >> (8 - (idx % 1) * 8)))
) {
charCode = str.charCodeAt((idx += 3 / 4));
if (charCode > 0xff) {
throw new Error(
  "'btoa' failed: The string to be encoded contains characters outside of the Latin1 range."
);
}
block = (block << 8) | charCode;
}
return output;
}
var requiredBasicAuth = "Basic " + encodeToBase64(`${user}:${pass}`);
var match = false;
if (event.request.headers.authorization) {
if (event.request.headers.authorization.value === requiredBasicAuth) {
match = true;
}
}
if (!match) {
return {
statusCode: 401,
statusDescription: "Unauthorized",
headers: {
"www-authenticate": { value: "Basic" },
},
};
}
return event.request;
}
Then you can test directly in the UI, and assuming it works (and you have customized the username and password), publish the function.
Please note that I found the individual pieces of the function above on the internet, so this is not my own code (other than piecing it together). I wish I could still find the sources so I could quote them here, but I can't find them anymore. Credits to the creators though! :-)
Next, open your CloudFront distribution and do the following:
Make sure the S3 bucket in your origin is configured as a REST endpoint and not a website endpoint, i.e. the origin domain must end in .s3.amazonaws.com and must not have the word "website" in the hostname.
Also in the Origin settings, under "S3 bucket access", select "Yes use OAI (bucket can restrict access to only CloudFront)". In the setting below click on "Create OAI" to create a new OAI (unless you have an existing one and know what you're doing). And select "Yes, update the bucket policy" to allow AWS to add the necessary permissions to your OAI.
Finally, open your Behavior of the CloudFront distribution and scroll to the bottom. Under "Function associations", for "Viewer request" select "CloudFront Function" and select your newly created CloudFront function. Save your changes.
And that should be it. With a bit of luck it's a matter of a couple of minutes (realistically more, I know), and there is no additional complexity once this is all set up.
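To verify the behavior end to end you can hit the distribution with curl's basic-auth flag (the domain below is a placeholder for your own):
# Without credentials: expect HTTP 401 plus a WWW-Authenticate header
curl -I https://dxxxxxxxxxxxx.cloudfront.net/
# With the credentials configured in the function: expect HTTP 200
curl -I -u myuser:mypassword https://dxxxxxxxxxxxx.cloudfront.net/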
Thanks for the useful post. An alternative to listing the plain-text username and password in the code, and to having base64-encoding logic there, is to pre-generate the base64-encoded string. One such encoder: https://www.debugbear.com/basic-auth-header-generator
From there the script becomes simpler. The following is for 'user' / 'password':
function handler(event) {
var base64UserPassword = "dXNlcjpwYXNzd29yZA==" // base64 of "user:password"
if (event.request.headers.authorization &&
event.request.headers.authorization.value === ("Basic " + base64UserPassword)) {
return event.request;
}
return {
statusCode: 401,
statusDescription: "Unauthorized ",
headers: {
"www-authenticate": { value: "Basic" },
},
}
}
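If you prefer a shell one-liner to the linked web tool, the same base64 value can be generated locally:
printf 'user:password' | base64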
An answer describing how to use CloudFront Functions already exists, but I want to add an improved version of the function:
Hardcoded credentials are stored as a SHA256 hash instead of plain text (or base64, which is as good as plain text), which is more secure.
It is possible to allow access from whitelisted global IP addresses:
function handler(event) {
var crypto = require('crypto');
var headers = event.request.headers;
var wlist_ips = [
"1.1.1.1",
"2.2.2.2"
];
var authString = "9c06d532edf0813659ab41d26ab8ba9ca53b985296ee4584a79f34fe9cd743a4";
if (
typeof headers.authorization === "undefined" ||
crypto.createHash(
'sha256'
).update(headers.authorization.value).digest('hex') !== authString
) {
if (
!wlist_ips.includes(event.viewer.ip)
) {
return {
statusCode: 401,
statusDescription: "Unauthorized",
headers: {
"www-authenticate": { value: "Basic" },
"x-source-ip": { value: event.viewer.ip}
}
};
}
}
return event.request;
}
The command below may be used to get the correct authString hash value for username user and password password:
printf "Basic $(printf 'user:password' | base64 -w 0)" | sha256sum | awk '{print$1}'
How can I output a clickable URL in the Postman Console (native app) from within a test script?
Like the "https://go.pstmn.io/postman-jobs" when you start the Postman Console.
For a sequence of API calls you might be using the collection runner, so you can store the lat/long values in variables and pass them on to the Google Maps URL.
Request 1: You need to store the lat/long values in the environment:
let resp = pm.response.json();
pm.environment.set("latitude",resp.lat);
pm.environment.set("longitude",resp.longitude);
postman.setNextRequest("Googlemaps"); // need to pass request name(used in postman) which need to be called
Request 2: In the runner, for the second request, add the line below to its Tests tab to stop the run:
postman.setNextRequest(null);
This is pretty old, but I found a hacky way to do this.
Step 0. Have your URL
There are plenty of ways, using the pre-/post-request script features in Postman, to generate this from some API output.
Step 1. Set up a simple API running on localhost that launches the browser
For me, I did this using Express and Node, something like:
const express = require('express');
const app = express();
const open = require('open'); // npm package that opens URLs in the default browser
const bodyParser = require('body-parser');
const port = 1;
app.use(bodyParser.urlencoded({ extended: true }));
app.get('/open-url', (req, res) => {
open(req.body.q);
res.send('Hello World!');
});
app.listen(port, () => {
console.log(`Example app listening at http://localhost:${port}`);
});
Running on http://localhost:1 (note that such a low port may require elevated privileges on some systems; any free port works).
Step 2. Create a GET request in Postman
A simple example
curl --location --request GET 'http://localhost:1/open-url' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'q=http://google.com'
Hi, I am trying to upload an image to Amazon S3 using react-native-aws-signature. Here is the sample code I am attaching:
var AWSSignature = require('react-native-aws-signature');
var awsSignature = new AWSSignature();
var source1 = {uri: response.uri, isStatic: true}; // this is the uri we got from the image picker
console.log("source:"+JSON.stringify(source1));
var credentials = {
  SecretKey: 'security-key',
  AccessKeyId: 'AccesskeyId',
  Bucket: 'Bucket_name'
};
var options = {
path: '/?Param2=value2&Param1=value1',
method: 'POST',
service: 'service',
headers: {
'X-Amz-Date': '20150209T123600Z',
'host': 'xxxxx.aws.amazon.com'
},
region: 'us-east-1',
body: response.uri,
credentials
};
awsSignature.setParams(options);
var signature = awsSignature.getSignature();
var authorization = awsSignature.getAuthorizationHeader();
Here I am declaring source1; its response.uri, which comes from the image picker, is what gets passed in the body. Can anyone tell me whether there is anything wrong in my code and, if so, how to resolve it? Any help is much appreciated.
awsSignature.getAuthorizationHeader() will return the authorization header when given the correct parameters, and that's all it does; it's just one step in the whole process of making a signed call to the AWS API.
When sending a POST request to S3, here is a link to the official documentation that you should read: S3 Documentation
It seems you need to send the image in as a form parameter, as sketched below.
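For illustration only, an S3 form POST looks roughly like the following. The key and file field names come from the S3 POST documentation, but the policy and signature fields (omitted here) must come from a real signing step, and the bucket URL is a placeholder:
// Hypothetical sketch of posting the picker result to S3 as form data.
var form = new FormData();
form.append('key', 'uploads/photo.jpg'); // object key to create in the bucket
// React Native lets you append a file reference by uri/type/name:
form.append('file', { uri: response.uri, type: 'image/jpeg', name: 'photo.jpg' });
// A real request also needs the policy/x-amz-* fields from your signing step.
fetch('https://Bucket_name.s3.amazonaws.com/', { method: 'POST', body: form });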
You can also leverage the new AWS Amplify library on the official AWS repo here: https://github.com/aws/aws-amplify
This has a storage module for signing requests to S3: https://github.com/aws/aws-amplify/blob/master/media/storage_guide.md
For React Native you'll need to install that:
npm install aws-amplify-react-native
If you're using Cognito User Pool credentials you'll need to link the native bridge as outlined here: https://github.com/aws/aws-amplify/blob/master/media/quick_start.md#react-native-development
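For completeness, here is a rough sketch of what the upload itself can then look like with Amplify's Storage module (the key name and content type are my own placeholders, and it assumes Amplify.configure(...) has already been called with your auth and bucket settings):
import { Storage } from 'aws-amplify';

async function uploadImage(localUri) {
  // Read the image picker result into a blob...
  const response = await fetch(localUri);
  const blob = await response.blob();
  // ...and let Amplify sign and send the S3 request.
  return Storage.put('photo.jpg', blob, { contentType: 'image/jpeg' });
}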