What is Datomic Ions? - clojure

I can't figure out when I should consider using Datomic Ions.
What are the benefits compared to a plain Clojure + Datomic project setup?

The benefits are listed at https://docs.datomic.com/cloud/ions/ions.html#benefits: develop your functions at the REPL and deliver them to AWS a minute later (a minimal sketch follows below).
For an extended walkthrough, check out https://www.youtube.com/watch?v=3BRO-Xb32Ic.
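To make that concrete, here is a minimal sketch of what a lambda ion can look like. The namespace, function, and app names below are invented for illustration; they are not from the docs or any real project.

;; Illustrative sketch only -- names are invented.
(ns my.app.ions)

(defn hello
  "Entry point for a lambda ion: Ions invoke it with a map containing
   :input (a JSON string) and :context, and it returns a string."
  [{:keys [input]}]
  (str "Hello from an ion, input was: " input))

You then name the function in resources/datomic/ion-config.edn (roughly {:lambdas {:hello {:fn my.app.ions/hello}} :app-name "my-app"}) and push/deploy it with the ion-dev tooling; the docs page above walks through the exact steps.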

Related

Pros and Cons of Google Dataflow VS Cloud Run while pulling data from HTTP endpoint

This is a design approach question where we are trying to pick the best option between Apache Beam / Google Dataflow and Cloud Run to pull data from HTTP endpoints (source) and push it downstream to Google BigQuery (sink).
Traditionally we have implemented similar functionality using Google Dataflow, where the sources are files in a Google Storage bucket, messages in Google Pub/Sub, etc. In those cases the data arrives in a 'push' fashion, so it makes much more sense to use a streaming Dataflow job.
However, in the new requirement, since the data is fetched periodically from an HTTP endpoint, it seems reasonable to use a Cloud Run service spun up on a schedule.
So I want to gather the pros and cons of each approach so that we can make a sensible design decision.
I am not sure this question is appropriate for SO, as it opens a big discussion with different opinions, without clear context, scope, functional and non-functional requirements, time and finance restrictions including CAPEX/OPEX, who will support the solution in BAU after commissioning and how, etc.
In my personal experience, I have developed a few dozen similar pipelines using various combinations of Cloud Functions, Pub/Sub topics, Cloud Storage, Firestore (for pipeline process state management) and so on, sometimes with Dataflow as well (embedded into the pipelines), but I have never used Cloud Run. So my knowledge and experience may not be relevant in your case.
The only thing I might suggest: try to prioritize your requirements (in a whole-solution-lifecycle context) and then design the solution based on those priorities. I know it is a trivial idea, sorry to disappoint you.

Using DynamoDB from CakePHP 3 installed to Elastic Beanstalk

I have installed CakePHP 3 using directions from this tutorial:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-cakephp-tutorial.html
It is working perfectly and the installation was actually quite easy. PHP, CakePHP, and MySQL are working, and I also noticed that the newest AWS SDK as a whole is installed in the vendor directory. So I am fully set to also use DynamoDB as a data source. You might ask why I should use DynamoDB since I am already using MySQL/MariaDB: it is because we have an application already in production that uses DynamoDB, and we should be able to write an admin application in CakePHP on top of DynamoDB. This is not a technical decision but one coming from the business side.
I found a good tutorial written by StarTutorial on how to use DynamoDB as a session handler in CakePHP 3:
https://www.startutorial.com/articles/view/using-amazon-dynamodb-as-session-handler-in-cakephp-3
Well, it is not a long way from there to using DynamoDB for putting data, getting data, and doing scans, is it? Do you have any simple example of how to do it, e.g. how to write data to DynamoDB or do a scan?
I have also read the article:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.PHP.html
and this is working fine, no problem. But I would like to keep all the advantages of CakePHP 3: templating, security and so on, thousands of hours saved with well-written code, and a very fast start on coding, for example, an admin console :)
Thank you,
You could create a Lambda function (in case you want to go serverless) or any other microservice to abstract communication with your DynamoDB. This will definitely simplify your PHP code. You may call Lambda functions directly (via API Gateway), or post messages to SQS for better decoupling. I would recommend the use of SQS -- you'll need some kind of microservice anyway to consume messages and deal with your DynamoDB in a CQRS fashion. Hope it helps!
Thank you for your answer. I was looking for an example of how to use the AWS SDK for DynamoDB without adding more complexity to this environment than it already has. Your suggestion means I would have to create yet another layer instead of using the SDK that already exists. Can you please give a working example of how the AWS SDK is used from CakePHP 3, so that it can use DynamoDB as a data source for its applications without losing CakePHP's own resources and capabilities (MVC, security, etc.)?
Thank you,
After some hard debugging and bug fixing, I was able to get it working using only the AWS SDK in CakePHP 3.

AWS Serverless application load time with the Spring framework

I am building a web application in AWS using the serverless architecture.
The purpose of the application is to expose a public API to upload files from around the world.
I use AWS API-Gateway and Lambda to execute my code and S3 as storage.
I know that it is very much possible and well supported (even by third parties like the Serverless framework) to use the Java Spring framework to write the code that I deploy in my Lambda function.
However, is it really recommended? Spring applications usually take 30 seconds or more to load completely, and a Lambda should run immediately.
How come this option is even supported by AWS (since it sounds like a very bad idea)?
Java is one of the supported programming languages of AWS Lambda. It is possible to run an application using Java; you just have to take the warm-up time into consideration. If that fits your use case, then use it. You could also use SNS and a hook to your Lambda to keep it warm if you do not receive requests.
Using Java with AWS Lambdas is perfectly fine, but Lambdas are functions, not applications!
So you should avoid using a framework like Spring, because you don't need it.
The question is: what do you want to achieve in your function, and why do you need a framework to execute such a small amount of code?
What's your use case?
Personally, I would AVOID using the Java runtime for AWS Lambda as much as possible. I understand that it's very tempting to use Java, assuming that you are looking into migrating an existing implementation into microservices. But you will always pay the penalty of slow warm-up time compared to other runtimes. You may also miss out on Java compiler optimisations, as the Lambda may not be invoked enough times to trigger C1 and C2 compilation.
My preference would be to use Java for Lambda only if you are planning to write a lean implementation, meaning no Spring, Hibernate, etc.

Clojure: run Datomic with Caribou framework

What do I need to do to run Datomic with Caribou framework, both for dev and prod servers?
In other words, how can I hack Caribou to make it happen?
Hope it makes sense! Thank you!
I'm one of the Caribou devs.
We use a db protocol to abstract over the differences between databases. I have a long-term plan to expand the protocol so that we can use storage that is not SQL, Datomic in particular (as well as Neo4j). We avoid SQL in the model namespace itself, so most of the changes would be in the db adapter protocol, though the protocol would need to be expanded and some existing operations would need to be swapped out for protocol calls (a rough sketch follows below).
If you want to contribute to this, I would be happy to provide some guidance, but the above is a rough outline of what would be needed.
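To make the idea concrete, here is a purely hypothetical sketch of a small storage protocol with a Datomic-backed implementation. None of these names come from the actual Caribou codebase, and the real adapter protocol would be considerably richer.

(ns example.datomic-store
  (:require [datomic.api :as d]))

;; Hypothetical protocol -- invented for illustration, not Caribou's own.
(defprotocol ModelStore
  (fetch-all [store model-kw] "Return the entity ids of every instance of a model.")
  (create!   [store attrs]    "Transact a new instance and return the tx result."))

;; A Datomic-backed implementation of the hypothetical protocol.
(defrecord DatomicStore [conn]
  ModelStore
  (fetch-all [_ model-kw]
    (d/q '[:find ?e
           :in $ ?model
           :where [?e :instance/model ?model]]
         (d/db conn) model-kw))
  (create! [_ attrs]
    @(d/transact conn [attrs])))

A SQL-backed record could implement the same protocol, which is roughly how keeping database-specific code out of the model namespace would continue to work.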
I'm not a Caribou expert, but from what I've seen browsing the source code, I don't think it's currently designed for Datomic plug-and-play.
Most of the critical model-querying functions are straight-up SQL, and the same goes for model creation.
So you could try rewriting the complete model.clj with the same API, which would be difficult, or you could try using model hooks, but that would be a real hack.
I'm not a Caribou maintainer, but I think it's currently not designed with Datomic or any other NoSQL database in mind, as you can see from the currently supported database adapters.

Play! Framework + DynamoDB

Being new to the Play Framework, I'm wondering whether it's easier than I think: is it possible to use DynamoDB with the Play Framework?
As DynamoDB is a NoSQL database, I expect that you would need a specific module, and since DynamoDB was only recently announced, such a module does not exist yet.
If you are interested in writing your own module, then using the Mongo module (http://www.playframework.org/modules/mongo-1.3/home) as a starting point (also NoSQL) will give you a good guide to how this has been achieved in other implementations.
Yes, you can use DynamoDB with this library: https://github.com/seratch/AWScala. It seems to work pretty well.