I'm new to AWS including Lambda (and Stack Overflow for that matter, so go easy on me please). I want to be able to get data from https://rapidapi.com/api-sports/api/api-football and post the results to my S3 or DynamoDB instances.
I've attempted creating an AWS Lambda function URL, which only succeeds in returning null results. I have tried looking for a straightforward explanation of how to achieve this, but I'm a bit stumped.
So I created a test Lambda function URL by copying the API code supplied by RapidAPI (Node.js fetch) and pasting it under this line of code: export const handler = async(event) => {
So I ended up with this code:
export const handler = async (event) => {
  const options = {
    method: 'GET',
    headers: {
      'X-RapidAPI-Host': 'api-football-v1.p.rapidapi.com',
      'X-RapidAPI-Key': 'MY API KEY',
      Authorization: 'Basic Og=='
    }
  };
  fetch('https://api-football-v1.p.rapidapi.com/v3/players?team=42&season=2022&search=saka', options)
    .then(response => response.json())
    .then(response => console.log(response))
    .catch(err => console.error(err))
};
I also added the JSON schema provided by RapidAPI.
I run the test in AWS and it says it succeeded, but I get the message below and it returns null.
Test Event Name
JSON
Response
null
Function Logs
START RequestId: bfb5ddd4-56f8-466a-b3c1-7ed89f3edc2b Version: $LATEST
2022-11-24T16:07:33.622Z 203cdbbc-a661-4256-91bb-2ada34d53042 ERROR TypeError: fetch failed
at Object.fetch (node:internal/deps/undici/undici:11118:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
cause: ConnectTimeoutError: Connect Timeout Error
at onConnectTimeout (node:internal/deps/undici/undici:6625:28)
at node:internal/deps/undici/undici:6583:50
at Immediate._onImmediate (node:internal/deps/undici/undici:6614:13)
at process.processImmediate (node:internal/timers:471:21) {
code: 'UND_ERR_CONNECT_TIMEOUT'
}
}
END RequestId: bfb5ddd4-56f8-466a-b3c1-7ed89f3edc2b
REPORT RequestId: bfb5ddd4-56f8-466a-b3c1-7ed89f3edc2b Duration: 194.65 ms Billed Duration: 195 ms Memory Size: 128 MB Max Memory Used: 73 MB
Request ID
bfb5ddd4-56f8-466a-b3c1-7ed89f3edc2b
Would anyone know what I'm doing wrong, or be able to point me in the right direction?
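Edit: looking at this again, I notice the handler never awaits or returns the fetch promise, so the function resolves before the request completes, which would at least explain the null response. Something like the sketch below is what I now think it should be (untested; the connect timeout also makes me suspect the function has no outbound internet access, e.g. it is attached to a VPC subnet without a NAT gateway):
export const handler = async (event) => {
  const options = {
    method: 'GET',
    headers: {
      'X-RapidAPI-Host': 'api-football-v1.p.rapidapi.com',
      'X-RapidAPI-Key': 'MY API KEY',
      Authorization: 'Basic Og=='
    }
  };
  // Await the request and return the parsed JSON so the invocation
  // result is the API payload rather than null
  const response = await fetch('https://api-football-v1.p.rapidapi.com/v3/players?team=42&season=2022&search=saka', options);
  return response.json();
};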
I have been struggling with this implementation and figured I'd ask for some community perspective at this point.
I've implemented PubSub with a Lambda successfully, and when tested in the cloud I am seeing messages in the IoT test environment. I believe, therefore, that my endpoint is functional.
When trying to implement the Amplify service via the docs (https://docs.amplify.aws/lib/pubsub/getting-started/q/platform/js/) I have been running into all sorts of issues. I worked through the "socket: undefined" issue by reinstalling the lock file and node_modules. Now I am not getting any errors, but it simply is not connecting.
My code is below. Currently when I try to publish I'm getting a response of []. If I try to specify the provider I get this error: "Error: Could not find provider named AWSIoTProvider".
Note: I have been following various SO answers, this one most recently:
Amplify PubSub javascript subscribe and publish using cognito authorization: how to?
import Amplify from 'aws-amplify';
import { AWSIoTProvider } from '@aws-amplify/pubsub/lib/Providers';
import PubSub from '@aws-amplify/pubsub';

Amplify.addPluggable(new AWSIoTProvider({
  aws_pubsub_region: 'as-southeast-2',
  aws_pubsub_endpoint: 'wss://{MY_IOT_ID}-ats.iot.ap-southeast-2.amazonaws.com/mqtt',
}));
Amplify.configure(config);
PubSub.configure();

PubSub.subscribe('myTopic1').subscribe({
  next: data => console.log('Message received', data),
  error: error => console.error(error),
  complete: () => console.log('Done'),
});
Then I have a function that I'm calling to publish, which returns the [] if I don't specify the provider and the error above if I specify it (both shown below).
Unspecified:
await PubSub.publish('1234-abcd-9876/workitem', { msg: 'Hello to all subscribers!' })
  .then(response => console.log('Publish response:', response))
  .catch(err => console.log('Publish Pub Err:', err));
Specified:
await PubSub.publish('1234-abcd-9876/workitem', { msg: 'Hello to all subscribers!' }, { provider: 'AWSIoTProvider' })
  .then(response => console.log('Publish response:', response))
  .catch(err => console.log('Publish Pub Err:', err));
Does anyone have any thoughts as to what I might be doing wrong here or might try next?
Thanks!
Since the specific error you got is "Error: Could not find provider named AWSIoTProvider", change the import path for AWSIoTProvider to @aws-amplify/pubsub instead of @aws-amplify/pubsub/lib/Providers:
import { AWSIoTProvider } from '@aws-amplify/pubsub';
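With the corrected import, the registration from the question stays the same; for completeness, a sketch (values copied from the question, except that the region string presumably wants to be 'ap-southeast-2' so it matches the endpoint):
import Amplify from 'aws-amplify';
import { AWSIoTProvider } from '@aws-amplify/pubsub';

Amplify.addPluggable(new AWSIoTProvider({
  aws_pubsub_region: 'ap-southeast-2',
  aws_pubsub_endpoint: 'wss://{MY_IOT_ID}-ats.iot.ap-southeast-2.amazonaws.com/mqtt',
}));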
Could anyone please share how to run and test AWS API Gateway and Lambda via the browser, and not via Postman or curl?
I am trying to create a simple demo app using HTML + JavaScript (with an ajax call to the API), calling the AWS API.
I tried with Postman and curl and both are working fine; however, when calling from the browser (ajax call) it is failing.
Any pointer would be a great help.
Code snippet:
$.ajax({
  type: "POST",
  url: URL,
  dataType: "json",
  mode: 'no-cors',
  crossDomain: "true",
  contentType: "application/json; charset=utf-8",
  data: JSON.stringify(data),
  success: function () {
    // clear form and show a success message
    alert("Your entry is saved Successfully");
    document.getElementById("my-form").requestFullscreen();
    location.reload();
  },
  error: function () {
    // show an error message
    alert("Seems some issue with the entry, try again.")
  }
});
This is a simple demo as of now to get the user name and other details and save them into DynamoDB via AWS Lambda (Python).
The AWS Lambda function is being called, but the response fails.
Sample code is https://codepen.io/mayanktripathi4u/pen/QWdrPOG
Tried various options using JS fetch, ajax, XMLHttpRequest etc.; none worked.
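Working from Postman/curl but failing only in the browser is the classic CORS symptom: the browser preflights the cross-origin POST and blocks the response unless the API returns CORS headers. (Note also that mode: 'no-cors' is a fetch option, not a jQuery ajax one.) With a Lambda proxy integration, the function itself has to return the headers; below is a minimal Node.js sketch of the idea (the question's Lambda is Python, but the response shape is the same, and the wide-open origin is an assumption to tighten later):
export const handler = async (event) => {
  return {
    statusCode: 200,
    // Proxy integrations must return CORS headers on every response,
    // including errors, or the browser hides the real status
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Headers': 'Content-Type',
      'Access-Control-Allow-Methods': 'OPTIONS,POST'
    },
    body: JSON.stringify({ message: 'saved' })
  };
};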
I have created a project in Express:
const express = require('express');
const app = express();
const PORT = 5555;

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

app.get('/tr', (req, res, next) => {
  res.json({ status: 200, data: 'tr' })
});

app.get('/po', (req, res, next) => {
  res.json({ status: 200, data: 'po' })
});

module.exports = {
  app
};
I deployed it to a Cloud Function named my-transaction,
and I am scheduling it with Google Cloud Scheduler, giving a URL like
http://url/my-transaction/po
When I deploy without authentication, the Scheduler job runs successfully, but when I deploy with authentication, it fails.
Similarly, if I create a sample project like below:
exports.helloHttp = (req, res) => {
  res.json({ status: 200, data: 'test hello' })
};
and deploy it configured the same as above, with authentication, it works.
The only difference is that in the last function the name matches the entry point, while above the entry point is app with different endpoints.
Any help appreciated.
Thanks
This is because you need to add auth information to your HTTP request in Cloud Scheduler.
First you need to create a service account with the role Cloud Functions Invoker. When you have created the service account, you can see that it has an email associated with it, for example:
cfinvoker@fakeproject.iam.gserviceaccount.com
After that you can create a new Scheduler job with auth information by following these steps:
1. Select target HTTP.
2. Write the URL (the Cloud Function URL).
3. Click on "Show more".
4. Select Auth header > Add OIDC token.
5. Write the full email address of the service account.
This new Scheduler job will send the HTTP request with the auth information needed to execute your Cloud Function successfully.
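If you prefer the CLI, the same job can be created with gcloud; a sketch (job name, schedule, and URLs are placeholders, and the OIDC audience should generally be the function URL):
gcloud scheduler jobs create http my-transaction-job \
  --schedule="*/5 * * * *" \
  --uri="https://REGION-PROJECT.cloudfunctions.net/my-transaction/po" \
  --http-method=GET \
  --oidc-service-account-email="cfinvoker@fakeproject.iam.gserviceaccount.com" \
  --oidc-token-audience="https://REGION-PROJECT.cloudfunctions.net/my-transaction"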
I am using AWS Cognito for authentication with user pools, and I have all my APIs configured in API Gateway. I hit Cognito directly from the Angular client, store the tokens returned by Cognito in local storage, and use them in subsequent calls.
The problem, however, is that if the token I send from Angular has expired, the Cognito authentication fails and no integration backend is hit. As a result, I am getting a 401 error in Chrome.
The interesting thing is that this 401 code is not available to me in the HTTP response object that is passed to Angular. A default 0 code is received by Angular, and this seems to be the case with all error codes received from the server (either Cognito or the backend).
I explored around and found that the issue might be because the gateway is not sending proper CORS headers in the error cases. I have read the related docs, but unfortunately I couldn't find a way to resolve the issue.
Can someone suggest a solution to this?
Edit: I also read somewhere that it is a known AWS issue. Is that the case?
You can manage the Cognito session before making the call to API Gateway.
In the example below, the getSession callback logs any error to the console and returns early; otherwise it calls the API with the session's freshly obtained access token.
////// Cognito.js
import {
  CognitoUserPool,
  CookieStorage
} from 'amazon-cognito-identity-js'
import config from '../config'

const cookieSettings = {
  domain: '.xxxxxxxxxxx.com',
  secure: true
}

export default {
  cookieSettings,
  pool: new CognitoUserPool({
    UserPoolId: config.UserPoolId,
    ClientId: config.ClientId,
    Storage: new CookieStorage(cookieSettings)
  })
}
//////

////// actions.js
import Cognito from './Cognito'

export function fetchUser() {
  // get cognito user cookies
  const cognitoUser = Cognito.pool.getCurrentUser()
  if (cognitoUser != null) {
    // try to get session from cognito
    cognitoUser.getSession(function(err, session) {
      if (err) {
        console.log(err)
        return;
      }
      fetch('https://api-gateway-url', {
        headers: {
          Authorization: 'Bearer ' + session.getAccessToken().getJwtToken(),
        }
      })
        .then(response => response.json())
        .then(json => console.log(json))
    })
  }
}
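As for the 401 arriving as a 0 in Angular: a status of 0 usually means the browser blocked the response before Angular could read it, typically because API Gateway's generated error responses (such as the authorizer's 401) carry no CORS headers by default. Those can be configured under Gateway Responses, e.g. on DEFAULT_4XX; a CLI sketch with a placeholder API id (verify the parameter quoting against the AWS CLI docs):
aws apigateway put-gateway-response \
  --rest-api-id abc123 \
  --response-type DEFAULT_4XX \
  --response-parameters gatewayresponse.header.Access-Control-Allow-Origin="'*'"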
Using AWS, I have followed an example of a Lambda function using the Serverless Framework. It is working as expected, but now I wonder what the best way of caching the response is.
The final version of this will consist of one or more JSON objects that will be retrieved on a regular basis.
The client side will call an API that retrieves the already-cached data.
So which AWS service should I use to actually implement the cache?
If it's a static bit of JSON, I'd simply import/return it from within the function rather than enable caching, but hey, it's your API!
To answer your question, you can use caching within API Gateway to do so; documentation can be found here.
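For reference, that caching is enabled per stage by provisioning a cache cluster; a sketch using the AWS CLI (API id, stage name, and cache size are placeholders, patch paths per the update-stage docs):
aws apigateway update-stage \
  --rest-api-id abc123 \
  --stage-name prod \
  --patch-operations op=replace,path=/cacheClusterEnabled,value=true op=replace,path=/cacheClusterSize,value=0.5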
Update:
I'd actually misread the question, so apologies for that. Whilst that caching works, what the OP is asking is where to store the retrieved data. If you're retrieving it from an external API, you can just write it to S3 like so:
import AWS from 'aws-sdk'

export default (event, context, callback) => {
  let s3Bucket = new AWS.S3({ params: { Bucket: 'YOUR_BUCKET_NAME' } })
  let documentName = 'someName'
  let fileExtension = 'json'
  let s3data = {
    Key: `${documentName}.${fileExtension}`,
    // S3 expects a string or Buffer body, so parse the incoming event
    // body and re-serialise the object we want to cache
    Body: JSON.stringify(JSON.parse(event.body).someJsonObject),
  }
  s3Bucket.putObject(s3data, (err, s3result) => {
    if (err) {
      console.log(err)
      callback(err, null)
    } else {
      console.log('S3 added', s3result)
      callback(null, 'done')
    }
  })
}
Then you just need to read the object back in your serving endpoint.
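For completeness, a sketch of that serving endpoint under the same assumptions as above (aws-sdk v2, the same bucket placeholder, and the someName.json key written by the snippet):
import AWS from 'aws-sdk'

const s3 = new AWS.S3()

export default (event, context, callback) => {
  // Read back the cached JSON document written by the endpoint above
  s3.getObject({ Bucket: 'YOUR_BUCKET_NAME', Key: 'someName.json' }, (err, data) => {
    if (err) {
      callback(err, null)
    } else {
      callback(null, {
        statusCode: 200,
        headers: { 'Content-Type': 'application/json' },
        body: data.Body.toString('utf-8'),
      })
    }
  })
}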