Building Docker images from Google Cloud Functions

As part of my CD pipeline, I am setting up a Google Cloud Function to handle new repo pushes, create Docker images and push them to the registry. I have it all working on a VM, but there is no need to have one running 24x7 just for this.
So, looking over the Node.js reference libraries, I can't find a way to push an image to a registry using Node. It seems there is no registry or build SDK for Node?
Basically, all I need is to execute this command from a cloud function:
gcloud builds submit --tag gcr.io/my_project/my_image.

It's quite possible to do this using the Cloud Build API. Here's a simple example using the client library for Node.js.
exports.createDockerBuild = async (req, res) => {
  const google = require('googleapis').google;
  const cloudbuild = google.cloudbuild({version: 'v1'});
  const client = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/cloud-platform']
  });
  const projectId = await google.auth.getProjectId();

  // Build request: source tarball in Cloud Storage, one docker build step, and the image to push
  const resource = {
    "source": {
      "storageSource": {
        "bucket": "my-source-bucket",
        "object": "my-nodejs-source.tar.gz"
      }
    },
    "steps": [{
      "name": "gcr.io/cloud-builders/docker",
      "args": [
        "build",
        "-t",
        "gcr.io/my-project-name/my-nodejs-image",
        "standard-hello-world"
      ]
    }],
    "images": ["gcr.io/$PROJECT_ID/my-nodejs-image"]
  };

  const params = {projectId, resource, auth: client};
  const result = await cloudbuild.projects.builds.create(params);
  res.status(200).send("200 - Build Submitted");
};
My source code was in a bucket but you could pull it from a repo just as easily.
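For example, a minimal sketch of the same request using a repoSource instead of a storageSource (the repo name and branch below are placeholders, not from my setup):
const resource = {
  "source": {
    "repoSource": {
      "projectId": projectId,
      "repoName": "my-repo",      // hypothetical Cloud Source Repositories repo
      "branchName": "master"
    }
  },
  // ...same "steps" and "images" as above
};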
Bear in mind that you'll need to use the Node.js 8 beta runtime for the async stuff to work.

Related

How to make an S3 deployment with the modern version of CodePipeline

I am trying to set up a brand new pipeline with the latest version of AWS CDK for TypeScript (1.128).
The creation of the pipeline is pretty straightforward. I have added source and build stages with no issues. The objective here is to have an automatic deployment of a static landing page.
So far I have this piece of code:
const landingPageStep = new ShellStep(`${PREFIX}LandingPageCodeBuildStep`, {
  input: CodePipelineSource.connection(`${GIT_ORG}/vicinialandingpage`, GIT_MAIN, {
    connectionArn: GIT_CONNECTION_ARN, // Created using the AWS console
  }),
  installCommands: [
    'npm ci',
  ],
  commands: [
    'npm run build',
  ],
  primaryOutputDirectory: 'out',
})

const pipeline = new CodePipeline(this, `${PREFIX}Pipeline`, {
  pipelineName: `${PREFIX}Pipeline`,
  synth: new ShellStep(`${PREFIX}Synth`, {
    input: CodePipelineSource.connection(`${GIT_ORG}/viciniacdk`, GIT_MAIN, {
      connectionArn: GIT_CONNECTION_ARN, // Created using the AWS console
    }),
    commands: [
      'npm ci',
      'npm run build',
      'npx cdk synth',
    ],
    additionalInputs: {
      'landing_page': landingPageStep,
    },
  }),
});
The step I am not sure how to achieve is deploying to S3 using the output of "landing_page". Previous versions of Pipelines made heavy use of Artifact objects and CodePipelineActions, something similar to this, where sourceOutput is an Artifact object:
const targetBucket = new s3.Bucket(this, 'MyBucket', {});
const pipeline = new codepipeline.Pipeline(this, 'MyPipeline');
const deployAction = new codepipeline_actions.S3DeployAction({
  actionName: 'S3Deploy',
  bucket: targetBucket,
  input: sourceOutput,
});
const deployStage = pipeline.addStage({
  stageName: 'Deploy',
  actions: [deployAction],
});
Now it is completely different, since you have access to FileSet objects, and apparently build steps are intended to be composed by nesting outputs as in the example above. Every output file is saved in a bucket with ugly file names, so it is not intended to be accessed directly either.
I have seen some hacky approaches that replace ShellStep with CodeBuildStep and use a post-build command in the buildspec.yml file, something like this:
aws s3 sync out s3://cicd-codebuild-static-website/
But that runs in the build stage and not in a deployment stage, where it would ideally live.
I have not seen anything insightful in the documentation so any suggestion is welcome. Thanks!
You can extend Step and implement ICodePipelineActionFactory. It's an interface that receives a codepipeline.IStage and adds whatever actions you need to it.
Once you have the factory step, you pass it in either the pre or post option of the addStage() method.
Something close to the following should work:
// Wraps a classic S3DeployAction so it can be used as a modern pipeline Step
class S3DeployStep extends Step implements ICodePipelineActionFactory {
  constructor(private readonly bucket: s3.IBucket, private readonly input: FileSet) {
    super('S3DeployStep');
  }

  public produceAction(stage: codepipeline.IStage, options: ProduceActionOptions): CodePipelineActionFactoryResult {
    stage.addAction(new codepipeline_actions.S3DeployAction({
      actionName: 'S3Deploy',
      bucket: this.bucket,
      // Convert the modern FileSet into the classic Artifact the action expects
      input: options.artifacts.toCodePipeline(this.input),
      runOrder: options.runOrder,
    }));
    return { runOrdersConsumed: 1 };
  }
}
// ...
pipeline.addStage(stage, { post: [new S3DeployStep(targetBucket, landingPageStep.primaryOutput!)] });
But a way way way simpler method would be to use BucketDeployment to do it as part of the stack deployment. It creates a custom resource that copies data to a bucket from your assets or from another bucket. It won't get its own step in the pipeline and it will create a Lambda function under the hood, but it's simpler to use.
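A minimal sketch of that approach, assuming the built site is available as a local asset directory at synth time (the path and construct id below are placeholders):
import * as s3deploy from '@aws-cdk/aws-s3-deployment'; // 'aws-cdk-lib/aws-s3-deployment' on CDK v2

new s3deploy.BucketDeployment(this, 'DeployLandingPage', {
  sources: [s3deploy.Source.asset('./landing_page/out')], // hypothetical path to the built site
  destinationBucket: targetBucket,                        // the bucket from your stack
});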

How to use Amazon Cognito without Amplify

I'm just now diving into Cognito. The AWS setup has been fairly straightforward and easy.
We have a variety of apps, web apps, and services, and we'd like those to make use of the Cognito service. I have experience setting up something similar with Auth0, but because we've been leveraging a number of Amazon Web Services, it really makes sense to use Cognito as well.
Everywhere I look, every guide eventually references the Amplify client-side library and CLI. We have existing apps and services and really don't want to change tooling or import anything unnecessary that adds bloat and complexity. Is there a way to use the Cognito service without Amplify libraries? Is there a lightweight Cognito-only client library for interfacing with the Cognito service and its authentication-and-authorization flow?
Update 03 Dec 2021
After re:Invent 2021, "Amplify Admin UI" was renamed to "Amplify Studio". With extra powers now:
automatically translates designs made in Figma to human-readable React UI component code
https://aws.amazon.com/blogs/mobile/aws-amplify-studio-figma-to-fullstack-react-app-with-minimal-programming/
===============
Original Answer
To start, I want to clarify that "Amplify" is an umbrella term for multiple things. We have:
Amplify Libraries (UI/JS)
Amplify CLI (to create cloud-native applications)
Amplify Console (ci/cd and hosting for full-stack web apps)
Amplify Admin UI (UI to create and configure full-stack web apps)
You can check the homepage for more clarification - https://docs.amplify.aws/
Is there a lightweight Cognito-only client library for interfacing with the Cognito service, authentication-and-authorization flow?
Behind the scenes, Amplify uses the amazon-cognito-identity-js library to interface with Amazon Cognito. You can install that directly via npm install amazon-cognito-identity-js.
The source code has been moved to the Amplify Libraries repository (e.g. amplify-js). Once again, it is part of the "Amplify" umbrella, under the first category, "Amplify Libraries".
Is there a way to use Cognito service without Amplify libraries?
Another approach is to use Amazon Cognito as an OAuth server. When you create an Amazon Cognito Hosted UI domain, it provides you with an OAuth 2.0 compliant authorization server.
You can create your own API/backend for signup/login endpoints and exchange tokens/credentials with the Amazon Cognito OAuth server without using the aws-sdk or any 3rd-party dependency.
I wrote a walkthrough example of how to configure your User Pool and which endpoints you need to talk to using Node.js; you can find it here: https://github.com/oieduardorabelo/node-amazon-cognito-oauth
You can follow the same idea for any other language.
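As an illustration of that flow, here is a minimal sketch (Node.js 18+ with the built-in fetch, no SDK) of exchanging an authorization code for tokens against the Hosted UI domain; the domain, client id/secret and redirect URI are placeholders:
const domain = 'https://my-domain.auth.us-east-1.amazoncognito.com'; // hypothetical Hosted UI domain
const clientId = 'YOUR_CLIENT_ID';
const clientSecret = 'YOUR_CLIENT_SECRET'; // only if the app client has a secret

async function exchangeCodeForTokens(code, redirectUri) {
  const body = new URLSearchParams({
    grant_type: 'authorization_code',
    client_id: clientId,
    code,
    redirect_uri: redirectUri,
  });
  const response = await fetch(`${domain}/oauth2/token`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      // The Basic header is only required when the app client has a secret
      'Authorization': 'Basic ' + Buffer.from(`${clientId}:${clientSecret}`).toString('base64'),
    },
    body,
  });
  // Returns { id_token, access_token, refresh_token, expires_in, token_type }
  return response.json();
}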
As mentioned by @oieduardorabelo, you can simply install 'amazon-cognito-identity-js', for which you can also find well-done examples on npm.
Here is my test code to help understand this lib. You must already have the infrastructure built on AWS (user pool, user pool client, and a new user to test sign-in; in my case the user has to change the password on first login, so I added that use case to my script):
import * as AmazonCognitoIdentity from 'amazon-cognito-identity-js';

var authenticationData = {
  Username: 'email',
  Password: 'password',
};
var authenticationDetails = new AmazonCognitoIdentity.AuthenticationDetails(authenticationData);

var poolData = {
  UserPoolId: 'us-east-1_userpoolid',
  ClientId: '26pjexamplejpkvt'
};
var userPool = new AmazonCognitoIdentity.CognitoUserPool(poolData);

// Returns the user from local storage if a session already exists
var cognitoUser = userPool.getCurrentUser();
console.log(cognitoUser);

if (!cognitoUser) {
  var userData = {
    Username: authenticationData.Username,
    Pool: userPool
  };
  cognitoUser = new AmazonCognitoIdentity.CognitoUser(userData);
  cognitoUser.authenticateUser(authenticationDetails, {
    onSuccess: function (result) {
      var accessToken = result.getAccessToken().getJwtToken();
      var idToken = result.getIdToken().getJwtToken();
      console.log('Success', accessToken, idToken);
    },
    // Fired when the user must set a new password on first login
    newPasswordRequired: function (userAttributes, requiredAttributes) {
      delete userAttributes.email_verified;
      cognitoUser.completeNewPasswordChallenge('DemoPassword1!', userAttributes, {
        onSuccess: (data) => {
          console.log(data);
        },
        onFailure: function (err) {
          alert(err);
        }
      });
    },
    onFailure: function (err) {
      alert(err);
    },
  });
}
If someone is interested in setting up this test project from scratch, run:
npm init -y
npm i -D webpack webpack-cli
npm i amazon-cognito-identity-js
in webpack.config.js:
var path = require('path');

module.exports = {
  entry: './src/app.js',
  mode: 'development',
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: 'main.js',
  }
}
Create a new file at ./src/app.js, add the previous amazon-cognito-identity-js code with the right AWS info, and create ./dist/index.html with:
...
<body>
<script src="main.js"></script>
</body>
in package.json, add a "watch" script:
...
"scripts": {
  "watch": "webpack --watch"
}
Finally run it:
npm run watch
and open index.html directly in the browser, with the dev console open as well.
Hopefully useful for someone.
While researching how to use Amazon Cognito without Amplify in React, I came across this sandbox. Switching from React Router 5 to 6 probably won't be a problem. The main gold here is this hook; the rest of the implementation can be found in the sandbox: https://codesandbox.io/s/cognito-forked-f02htu
import { useState, useEffect, useCallback } from "react";
import { CognitoUserPool } from "amazon-cognito-identity-js";

const Pool_Data = {
  UserPoolId: "xxx",
  ClientId: "yyy"
};

export default function useHandler() {
  const [state, setstate] = useState({
    loading: false,
    isAuthenticated: false
  });
  const { loading, isAuthenticated } = state;

  const userPool = new CognitoUserPool(Pool_Data);

  // Returns the currently signed-in user (from local storage), or null
  const getAuthenticatedUser = useCallback(() => {
    return userPool.getCurrentUser();
  }, []);

  console.log(getAuthenticatedUser());

  useEffect(() => {
    getAuthenticatedUser();
  }, [getAuthenticatedUser]);

  const signOut = () => {
    return userPool.getCurrentUser()?.signOut();
  };

  console.log("I am here", getAuthenticatedUser()?.getUsername());

  return {
    loading,
    isAuthenticated,
    userPool,
    getAuthenticatedUser,
    signOut
  };
}
I wrote an article a couple of years ago explaining how to do this.
The article talks about Amplify, but as mentioned in another response, that's more of an umbrella term; in the article we mostly use UI components provided by the Amplify project.
You can find it here: https://medium.com/@mim3dot/aws-amplify-cognito-part-2-ui-components-935876fabad3

Selecting region for API in Google Cloud

I just want to know the steps to select a different region and zone for an API within the same project in the Google Cloud console.
I have already tried setting the default location and region,
but I want to select them every time an API is enabled.
There is no feature to choose the location of an API, but you can set the location/region when creating an instance of most Google Cloud products and services, such as App Engine, Cloud Functions, Compute Engine, etc.
Note that the selected location/region of some services, like App Engine, cannot be changed once you have deployed your app. The only way to change it is to create a new project and select the preferred location.
If you are referring to this documentation about changing the default location, I believe it is applicable only to Compute Engine resources. I would recommend always checking the default region and zone, or the selected location settings, when creating and managing your resources.
The default zone and region for Compute Engine are saved in the project metadata, so you should change or set them there.
You should use the following API: projects.setCommonInstanceMetadata
https://cloud.google.com/compute/docs/reference/rest/v1/projects/setCommonInstanceMetadata
Example in Node.js:
const {google} = require('googleapis');
const compute = google.compute('v1');

async function addDefaultRegion(authClient, projectName) {
  var request = {
    project: projectName,
    resource: {
      "items": [
        {
          "key": "google-compute-default-region",
          "value": "europe-west1"
        }
      ]
    },
    auth: authClient
  };
  // Writes the default region into the project-wide instance metadata
  compute.projects.setCommonInstanceMetadata(request, function(err, response) {
    if (err) {
      console.error(err);
      return;
    }
    console.log(JSON.stringify(response, null, 2));
  });
};

async function authorize() {
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/cloud-platform']
  });
  return await auth.getClient();
}
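A minimal usage sketch tying the two functions together ("my-project" is a placeholder project ID):
authorize()
  .then((authClient) => addDefaultRegion(authClient, 'my-project'))
  .catch(console.error);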

Use aws-sdk in alexa skill

I am trying to develop an Alexa skill that fetches information from a DynamoDB database. In order to do that, I have to import the aws-sdk.
But for some reason, when I import it, my skill stops working. The skill does not even open. My code is hosted in the Alexa Developer Console.
Here's what happens:
In the testing panel, when I input 'Open Cricket Update' (the app name), Alexa's response is, 'There was a problem with the requested skill's response'.
This happens only when I import the aws-sdk.
What am I doing wrong?
index.js
const Alexa = require('ask-sdk-core');
const AWS = require('aws-sdk');
AWS.config.update({region: 'us-east-1'});

const table = 'CricketData';
const docClient = new AWS.DynamoDB.DocumentClient();

const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
  },
  handle(handlerInput) {
    const speakOutput = 'Hello! Welcome to cricket update.';
    return handlerInput.responseBuilder
      .speak(speakOutput)
      .reprompt(speakOutput)
      .getResponse();
  }
};
package.json
{
  "name": "hello-world",
  "version": "1.1.0",
  "description": "alexa utility for quickly building skills",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Amazon Alexa",
  "license": "ISC",
  "dependencies": {
    "ask-sdk-core": "^2.6.0",
    "ask-sdk-model": "^1.18.0",
    "aws-sdk": "^2.326.0"
  }
}
You are missing the exports.handler block at the end of your index.js that "builds" the skill composed from your handlers, e.g.
exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(LaunchRequestHandler)
  .lambda();
A more complete example can be found here
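For completeness, a hedged sketch of how the DocumentClient from the question could be used inside an intent handler once the skill builds; the intent name and key below are illustrative placeholders, not part of the original code:
const GetScoreIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'GetScoreIntent'; // hypothetical intent
  },
  async handle(handlerInput) {
    // Read one item from the CricketData table using the DocumentClient defined above
    const data = await docClient.get({
      TableName: table,
      Key: { matchId: 'latest' } // hypothetical partition key
    }).promise();
    const speakOutput = data.Item ? `The latest score is ${data.Item.score}.` : 'No score found.';
    return handlerInput.responseBuilder.speak(speakOutput).getResponse();
  }
};
Remember that any such handler also has to be registered in the addRequestHandlers() call above.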

How can I publish to an MQTT topic in an Amazon AWS Lambda function?

I would like to have an easy command, like the one I use in bash, to publish something to an MQTT topic inside an AWS Lambda function, along the lines of:
mosquitto_pub -h my.server.com -t "light/set" -m "on"
Background: I would like to turn a lamp on and off with Alexa. Alexa can start a Lambda function, and inside of this Lambda function I would like to publish to MQTT, because the lamp can listen to an MQTT topic and react to the messages there. (Maybe there are easier solutions, but we are in a complicated (university) network, which makes many other approaches more difficult.)
If you are using Python, I was able to get an AWS Lambda function to publish a message to AWS IoT using the following inside my handler function:
import boto3
import json

client = boto3.client('iot-data', region_name='us-east-1')

# Change topic, qos and payload
response = client.publish(
    topic='$aws/things/pi/shadow/update',
    qos=1,
    payload=json.dumps({"foo": "bar"})
)
You will also need to ensure that the role (in your Lambda function configuration) has a policy attached that allows access to the IoT publish action. Under IAM -> Roles you can add an inline policy to your Lambda function role like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:Publish"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
If you're using Node.js, this will work -
var AWS = require('aws-sdk');
var iotdata = new AWS.IotData({ endpoint: '*****************.iot.us-east-1.amazonaws.com' });

exports.handler = async (event) => {
  console.log("Event => " + JSON.stringify(event));
  var params = {
    topic: "MyTopic",
    payload: JSON.stringify(event),
    qos: 0
  };
  return iotdata.publish(params, function(err, data) {
    if (err) {
      console.log("ERROR => " + JSON.stringify(err));
    }
    else {
      console.log("Success");
    }
  }).promise();
};
Remember to add the iot:Publish permission to the role used by this Lambda function.
The AWS SDK has two classes to work with IoT: Iot and IotData. IotData.publish is the method you are looking for. It looks like the Iot object is for working with things and IotData is for working with MQTT and shadows. This ought to be directly referenced in the documentation on MQTT and shadows, but it isn't.
This service (IotData) is also available in the CLI.
The previous Node.js post sent the message twice for me.
The correction is here:
var mqttParams = {
  topic: topicName,
  payload: JSON.stringify(event),
  qos: 1
};

const request = iotdata.publish(mqttParams);
request
  .on('success', () => console.log("Success"))
  .on('error', () => console.log("Error"));
return new Promise(() => request.send());
This worked for me using Rust.
main.rs:
use lambda_http::aws_lambda_events::serde_json;
use lambda_runtime::{service_fn, Error, LambdaEvent};
use serde_json::Value;

#[tokio::main]
async fn main() -> Result<(), Error> {
    let func = service_fn(func);
    lambda_runtime::run(func).await?;
    Ok(())
}

async fn func(_event: LambdaEvent<Value>) -> Result<(), Error> {
    let config = aws_config::load_from_env().await;
    let client = aws_sdk_iotdataplane::Client::new(&config);
    let publish = client
        .publish()
        .topic("topic")
        .qos(1)
        .payload(aws_smithy_types::Blob::new("payload"));
    publish.send().await?;
    Ok(())
}
Cargo.toml:
[package]
name = "MyLambda"
version = "0.1.0"
edition = "2021"
[dependencies]
lambda_http = { version = "0.7", default-features = false, features = ["apigw_http"] }
lambda_runtime = "0.7"
tokio = { version = "1", features = ["macros"] }
aws-config = "0.51.0"
aws-sdk-iotdataplane = "0.21.0"
aws-smithy-types = "0.51.0"
If you use Node.js, you need to install the mqtt library. The following steps help you download and install the mqtt library for AWS Lambda.
Download and install Node.js and npm on your PC.
Download the MQTT library for Node.js.
Unzip it into the nodejs directory where Node.js was installed (in Windows 10 x64, the nodejs directory is C:\Program Files\nodejs).
Create a folder to store the installed mqtt files, for example D:\lambda_function.
Run Command Prompt as administrator and change directory to the nodejs directory.
Install the mqtt library to D:\lambda_function:
C:\Program Files\nodejs>npm install --prefix "D:\lambda_function" mqtt
Here's a similar project.
Here is simple JavaScript code using async/await:
const AWS = require("aws-sdk");

exports.handler = async event => {
  try {
    const iotData = new AWS.IotData({ endpoint: "IOT_SERVICE_ID-ats.iot.eu-central-1.amazonaws.com" });
    const params = {
      topic: 'some/topic',
      payload: JSON.stringify({ var1: "val1" })
    };
    const result = await iotData.publish(params).promise();
    console.log(result);
    return { statusCode: 200, body: `IoT message published.` };
  } catch (e) {
    console.error(e);
    return { statusCode: 500, body: `IoT message could not be published.` };
  }
};
Don't forget to give this Lambda the required iot:Publish permission to publish to this IoT topic.