How to deploy a large Node.js package to AWS Lambda?

I am trying to deploy a simple script to AWS Lambda that would generate critical CSS for a website. Running this serverless seems to make sense (but I cannot find any working examples).
The problem is the package size. I am trying to use https://github.com/pocketjoso/penthouse. When I simply npm install penthouse, the package size is suddenly over 300 MB. The size limit on Lambda is only 250 MB, so it will not upload.
Is there any way to solve this? Perhaps downloading penthouse on the fly? If so, is there any example?
Performance is not so critical in this case as it would be called only a few times a day by an automated process.

Looking at the bundle size of the package (https://bundlephobia.com/result?p=penthouse), it doesn't appear that your issue is primarily with the penthouse package itself. Although I cannot say for certain, I think it's mainly down to the size of your other dependencies.
Nevertheless, seeing as this isn't a critical system and will only be accessed a few times a day by an automated process, you can reduce the size of your node_modules folder by loading libraries from a CDN instead.
There are a number of services which allow you to do this; I have primarily used UNPKG and jsDelivr in the past, as they appear to be reliable with minimal-to-no downtime.
Your question doesn't say exactly which technology you're using or how far you can go to achieve your desired result, but there are a few options you can choose from:
Utilise webpack's externals configuration (a sketch follows this list): https://webpack.js.org/configuration/externals/
Use a CDN library loader such as import-cdn-js (https://www.npmjs.com/package/import-cdn-js) or from-cdn (https://www.npmjs.com/package/from-cdn)
loadjs is another option: https://github.com/muicss/loadjs
scriptjs: https://www.npmjs.com/package/scriptjs
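For the webpack externals route, here is a rough sketch of what the configuration might look like; the entry path and output settings are placeholders, and it assumes penthouse is supplied at runtime (for example fetched from a CDN) rather than bundled:
// webpack.config.js (sketch)
module.exports = {
  target: "node",
  entry: "./src/handler.js", // placeholder entry point
  output: {
    filename: "handler.js",
    libraryTarget: "commonjs2",
  },
  externals: {
    // leave require("penthouse") untouched instead of bundling it,
    // so the module must be provided at runtime
    penthouse: "commonjs penthouse",
  },
};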
I don't know much about penthouse but with scriptjs, I assume you can achieve something like this:
var fs = require("fs");
var penthouseScript = require("scriptjs");

penthouseScript("https://cdn.jsdelivr.net/npm/penthouse@2.2.2/lib/index.min.js", () => {
  // penthouse related code
  penthouse({
    url: "http://google.com",
    cssString: "body { color: red }",
  }).then((criticalCss) => {
    // use the critical css
    fs.writeFileSync("outfile.css", criticalCss);
  });
});

Related

How to configure OpenTelemetry agent for an Akka application

I am trying to export metrics and traces from my Akka app (written in Scala) using the OpenTelemetry agent, with the goal of consuming the data in OpenSearch.
Technology stack for my application:
Akka - 2.6.*
RabbitMQ (amqp client 5.12.*)
PostgreSQL (jdbc 42.2.*)
I've added the OpenTelemetry instrumentation runtime dependency to build.sbt:
val runtimeDependencies: Seq[ModuleID] = Seq(
  "io.opentelemetry.instrumentation" % "opentelemetry-instrumentation-api" % otelInstrumentationVersion % "runtime"
)
...
libraryDependencies ++= compileDependencies ++ testDependencies ++ runtimeDependencies,
I am attaching the agent and pointing it at an OpenTelemetry properties file via JAVA_OPTS:
export JAVA_OPTS="... \
-javaagent:lib/opentelemetry/opentelemetry-javaagent-all-v1.6.0.jar \
-Dotel.javaagent.configuration-file=lib/opentelemetry/otel.properties"
The only other related piece in my code is the properties file:
otel.service.name=my-app
otel.traces.exporter=jaeger
otel.propagators=jaeger
I do receive some traces in OpenSearch, but they are disparate and unrelated, whereas I would expect them to be linked. For example, a message is received on a RabbitMQ topic, it makes its way into an actor, and the actor eventually issues a SQL query; as a result I should be able to see, for each execution, how long each step took.
This is an approximate view that I get in OpenSearch:
I would love to be able to follow documentation, but I find OpenTelemetry's configuration guide scarce at this point.
Update:
Not sure whether this is relevant, but I get a warning from Data Prepper:
2021-09-29T16:50:50,861 [raw-pipeline-prepper-worker-5-thread-1] WARN com.amazon.dataprepper.plugins.prepper.oteltrace.OTelTraceRawPrepper - Missing trace group for SpanId: 922097e31cf96c72
OK, so I got around this by running across this issue and then reading about how to suppress specific instrumentations.
So, to reduce clutter in the tracing dashboard, add something like the following to the properties file (or the equivalent via environment variables):
otel.instrumentation.rabbitmq.enabled=false
otel.instrumentation.grpc.enabled=false
Note that I disabled the two instrumentation libraries that were cluttering things in my particular use case. For another application you would choose other libraries from link #2 above. This way, the spans that you declare yourself as the application developer become the roots.
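For reference, a minimal sketch of declaring such a span manually, assuming the javaagent is attached (it registers itself as the global OpenTelemetry instance) and the opentelemetry-api artifact is on the compile classpath; handleMessage is a hypothetical handler, not code from the question:
import io.opentelemetry.api.GlobalOpenTelemetry
import io.opentelemetry.api.trace.{Span, Tracer}

object Tracing {
  // The javaagent wires itself into GlobalOpenTelemetry at startup
  val tracer: Tracer = GlobalOpenTelemetry.getTracer("my-app")

  // Hypothetical handler: the span opened here becomes the trace root, and any
  // auto-instrumented work inside it (e.g. the JDBC call) attaches as child spans.
  def handleMessage(payload: String): Unit = {
    val span: Span = tracer.spanBuilder("handle-message").startSpan()
    val scope = span.makeCurrent()
    try {
      // actor / database work goes here
    } finally {
      scope.close()
      span.end()
    }
  }
}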

Next.js CLI - is it possible to pre-build certain routes when running dev locally?

I'm part of an org with an enterprise app built on Next.js, and as it's grown the local dev experience has been degrading. The main issue is that our pages make several calls to /api routes on load, and those are built lazily when you run yarn dev, so you're always forced to sit and wait in the browser while that happens.
I've been thinking it might be better if we were able to actually pre-build all of the /api routes right away when yarn dev is run, so we'd get a better experience when the browser is opened. I've looked at the CLI docs but it seems the only options for dev are -p (port) and -H (host). I also don't think running yarn build first will work as I assume the build output is quite different between the build and dev commands.
Does anyone know if this is possible? Any help is appreciated, thank you!
I don't believe there's a way to prebuild them, but you can tell Next how long to keep them before discarding and rebuilding. Check out the onDemandEntries docs. We had a similar issue and solved it for a big project about a year ago with this in our next.config.js:
const { PHASE_DEVELOPMENT_SERVER } = require("next/constants")

module.exports = (phase, {}) => {
  let devOnDemandEntries = {}
  if (phase === PHASE_DEVELOPMENT_SERVER) {
    devOnDemandEntries = {
      // period (in ms) where the server will keep pages in the buffer
      maxInactiveAge: 300 * 1000,
      // number of pages that should be kept simultaneously without being disposed
      pagesBufferLength: 5,
    }
  }
  return {
    onDemandEntries: devOnDemandEntries,
    ...
  }
}

AWS Amplify federated Google login works properly in the browser but doesn't work on Android

The issue: when I run federated authentication using the Amplify Auth method in the browser it works fine, but not when I run it on my mobile device.
It throws the error "No user found" when I call Auth.currentSession(), although the same call works in the browser.
I tried searching for this type of issue, but only found results related to ionic-cordova-google-plugin, not to the AWS Amplify federated login issue.
Updating the question after it was closed for having too little debugging information, without anyone asking me for more.
These are issues raised on GitHub that match my problem:
Issue 5351 on amplify-js, which is still open:
https://github.com/aws-amplify/amplify-js/issues/5351
Another issue, 3537, which is also still open.
These two issues describe the same scenario as mine. I hope this is enough debugging information; if more is required, please ask in a comment instead of closing without notification. To a beginner, that feels like bullying rather than helping.
I fixed the above problem by referring to a comment describing a workaround.
Here is the link that takes you directly to that comment: link to comment.
First read that comment, as it will give you an overall idea of what exactly the issue is, instead of jumping directly to the solution.
Once you have read it, you may still be a little unclear about the implementation, since the author uses Capacitor and not everyone is using Capacitor.
In my implementation I ignored this part, as I am not using Capacitor:
App.addListener('appUrlOpen')
Now let's get to the main step, where we fix the issue. I am using deep links to redirect to my application:
this.platform.ready().then(() => {
  this.deeplinks
    .route({
      "/success.html": "success",
      "/logout.html": "logout",
    })
    .subscribe(
      (match: any) => {
        const fragment = JSON.stringify(match).split('"fragment":"')[1];
        // this link can be any link, based on your requirement;
        // what I am doing is passing along all the data I get in the fragment.
        // The fragment consists of id_token, state, code and response type.
        // These need to be passed to Ionic in order for Amplify to run its magic.
        document.location.href = `http://192.168.1.162:8100/#${fragment}`;
      },
      (nomatch) => {
        console.log("Got a deeplink that didn't match", nomatch);
      }
    );
});
I got this idea from the issue in which the developer mentioned sending the code and state along with the application's deep-linking URL.

Java code to get the currently running Beanstalk version label?

From within a Java application running on Elastic Beanstalk, how can I get the version label of the application version that is currently running?
[Multiple Edits later...]
After a few back-and-forth comments with Sony (see below), I wrote the following code, which works for me now. If you put meaningful comments in your version label when you deploy, then this will tell you what you're running. We have a continuous build environment, so we can get it to supply a label that leads to the check-in comments for the related code. Put this all together and your server can tell you exactly what code it's running relative to your source-code check-ins. Really useful for us. OK, now I'm actually answering my own question here, but with invaluable help from Sony. It seems a shame you can't remove the hard-coded values and query for those at runtime.
String getMyVersionLabel() throws IOException {
    Region region = Region.getRegion(Regions.fromName("us-west-2")); // Need to hard-code this
    AWSCredentialsProvider credentialsProvider = new ClasspathPropertiesFileCredentialsProvider();
    AWSElasticBeanstalkClient beanstalk = region.createClient(AWSElasticBeanstalkClient.class, credentialsProvider, null);
    String environmentName = System.getProperty("PARAM2", "DefaultEnvironmentName"); // Need to hard-code this too
    DescribeEnvironmentsResult environments = beanstalk.describeEnvironments();
    for (EnvironmentDescription ed : environments.getEnvironments()) {
        if (ed.getEnvironmentName().equals(environmentName)) {
            return "Running version " + ed.getVersionLabel() + " created on " + ed.getDateCreated();
        }
    }
    return null;
}
You can use the AWS Java SDK and call this directly.
See the details of the describeApplicationVersions API for how to get all the versions in an application. Ensure you specify your region as well (otherwise you will get the versions from the default AWS region).
Now, if you need to know the version currently deployed, you additionally need to call DescribeEnvironments (via a DescribeEnvironmentsRequest). Its result has the versionLabel, which tells you the version currently deployed.
Here again, if you need to know the environment name in the code, you need to pass it as a parameter to the Beanstalk configuration in the AWS console and access it as a PARAM.
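For reference, a rough sketch of both calls using the v1 AWS SDK for Java; the application and environment names here are placeholders, not values from the question:
import com.amazonaws.auth.ClasspathPropertiesFileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient;
import com.amazonaws.services.elasticbeanstalk.model.*;

AWSElasticBeanstalkClient beanstalk = new AWSElasticBeanstalkClient(new ClasspathPropertiesFileCredentialsProvider());
beanstalk.setRegion(Region.getRegion(Regions.US_WEST_2));

// All versions uploaded for the application
DescribeApplicationVersionsResult versions = beanstalk.describeApplicationVersions(
        new DescribeApplicationVersionsRequest().withApplicationName("my-app"));
for (ApplicationVersionDescription v : versions.getApplicationVersions()) {
    System.out.println(v.getVersionLabel() + " created " + v.getDateCreated());
}

// The version currently deployed to a given environment
DescribeEnvironmentsResult environments = beanstalk.describeEnvironments(
        new DescribeEnvironmentsRequest().withEnvironmentNames("my-env"));
for (EnvironmentDescription ed : environments.getEnvironments()) {
    System.out.println(ed.getEnvironmentName() + " is running " + ed.getVersionLabel());
}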

Coldfusion 10 - Application specific classpaths

I am working with a CF10 application and trying to define application-specific classpaths to load JARs using the this.javaSettings feature introduced in CF10.
From Application.cfc:
THIS.javaSettings = {
    LoadPaths = [".\java_lib\", ".\java\myjar.jar"],
    loadColdFusionClassPath = true,
    reloadOnChange = false
};
This is working great, and I can define JARs on a per-application basis. However, every time I reload the application (for example, if I call applicationStop()), CF seems to hold on to all the previously loaded JARs/classes while re-loading them all, which means that after a number of reloads I inevitably get an out-of-memory PermGen error.
Has anyone experienced this? I have tried the usual things by updating GC strategies to enable permgen collection:
-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
OK, this was not an issue with the CF feature; it turns out the memory leak originated in Groovy code that had been compiled into one of the JARs (you can read the Groovy details here: https://stackoverflow.com/a/17952925/258813).
It appears the CF10 hot-reloading of JARs is working OK!