We are building a customer-facing app. Data is captured by IoT devices owned by a third party and transferred to us from their server via API calls. We store this data in our Amazon DocumentDB cluster, and the user app connects to this cluster with real-time data feed requirements. Note: the data is time-series data.
The thing is, for long-term storage and for analytics dashboards to be shared with stakeholders, our data governance team is asking us to replicate/copy the data daily from the Amazon DocumentDB cluster to their Google Cloud Platform project, specifically BigQuery. We could then run queries directly on BigQuery for analysis and feed the results into a tool such as Explorer or Tableau to create dashboards.
I couldn't find a straightforward solution for this, so any ideas, comments, or suggestions are welcome. How do I plan and achieve this replication? How do I make sure the data is copied efficiently, in terms of both memory and cost? I also don't want to disturb the performance of the DocumentDB cluster, since it supports our user-facing app.
This will need some custom implementation. You can use change streams and process the data changes in intervals, sending them to BigQuery, so that you have a replication mechanism in place for running analytics. One documented use case for change streams is analytics with Redshift, so BigQuery should serve a similar purpose.
Using Change Streams with Amazon DocumentDB:
https://docs.aws.amazon.com/documentdb/latest/developerguide/change_streams.html
That document also includes sample Python code for consuming change stream events.
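To make this concrete, here is a minimal sketch of that pattern, assuming the pymongo and google-cloud-bigquery client libraries and that change streams have been enabled on the collection; the connection string, database/collection names, and BigQuery table are placeholders:

```python
# Minimal sketch: tail the DocumentDB change stream and batch new documents
# into BigQuery. A real consumer would also flush on a timer and persist
# stream.resume_token so it can restart where it left off.
from pymongo import MongoClient
from google.cloud import bigquery

DOCDB_URI = "mongodb://user:pass@docdb-cluster:27017/?tls=true"  # placeholder
BQ_TABLE = "my-project.analytics.sensor_readings"                # placeholder
BATCH_SIZE = 500

collection = MongoClient(DOCDB_URI)["iot"]["readings"]  # hypothetical names
bq = bigquery.Client()

batch = []
with collection.watch(full_document="updateLookup") as stream:
    for change in stream:
        if change["operationType"] in ("insert", "update", "replace"):
            doc = dict(change["fullDocument"])
            doc["_id"] = str(doc["_id"])  # ObjectId is not JSON-serializable
            batch.append(doc)
        if len(batch) >= BATCH_SIZE:
            errors = bq.insert_rows_json(BQ_TABLE, batch)  # streaming insert
            if errors:
                raise RuntimeError(f"BigQuery insert failed: {errors}")
            batch.clear()
```

Two cost notes: insert_rows_json uses BigQuery's streaming-insert API, which is billed per row, so since you only need daily replication it may be cheaper to accumulate the change events into files and run a daily BigQuery load job, which is free. Reading the change stream is also much lighter on the cluster than repeatedly re-querying the collections, which helps protect your user-facing workload.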
Is there a way for me to run a Cloud Function to update some data whenever there is a lot of activity on one of my pages? Sorry if this is a stupid question.
Cloud Functions does not have any trigger that automatically responds to the load on a website. You will have to find some other way to gauge that traffic, and then perhaps invoke an HTTP trigger directly.
If you want to see the complete list of trigger types, see the documentation.
As far as I know, there is no direct trigger between a spike of events in service A and a Cloud Function (unless service A is itself a Cloud Function, which would indeed scale).
However, there is a second-party trigger via Stackdriver (now Cloud Monitoring) that allows you to define alerting rules which, once hit, deliver an event to a Pub/Sub topic that a Cloud Function of your choice can subscribe to. Be cautious with what you intend to do in your Cloud Function: you may have concurrency issues depending on the technology of the database you want to modify. In other words, try to be as idempotent as possible.
I think you may be interested in this community tutorial.
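As a minimal sketch of the receiving side, assuming a 1st-gen Python Cloud Function subscribed to the alerting topic (the function and topic names here are made up):

```python
import base64
import json

def on_traffic_alert(event, context):
    """Triggered by a Cloud Monitoring alert delivered via Pub/Sub."""
    # Pub/Sub hands the message payload to the function base64-encoded.
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    incident = payload.get("incident", {})
    print(f"Alert {incident.get('incident_id')}: {incident.get('summary')}")
    # Perform your update here; keep it idempotent, since Pub/Sub may
    # redeliver the same message more than once.
```

It would be deployed with something like:

```
gcloud functions deploy on_traffic_alert --runtime python39 --trigger-topic traffic-alerts
```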
(Please feel free to mark this question as a duplicate and share pointers to duplicates.)
Hi,
We are developing a Spring Boot based application and will be using Docker in production.
Currently it uses MongoDB (Atlas) for storing its logs, but MongoDB Cloud looks like it will be an expensive option for storing logs/audit trails.
Since we are going to use AWS, which AWS service should we use to store Log4j logs and audit messages?
Usually people store logs in S3, where you can archive them for reasonable money using a combination of the Infrequent Access and Glacier storage classes, and you can also apply a lifecycle policy so the logs are automatically removed after a defined amount of time.
If you are looking for some kind of streaming/logging over the network, you may start with AWS Lambda functions or SQS, or you may want to go with a service like https://aws.amazon.com/kinesis/data-firehose/ if you believe your volume is really big.
The other advantage of S3 (besides the low price) is that most other services support reading data from S3. So if you later decide to analyze the data with Elasticsearch or an Elastic MapReduce cluster, you will probably have a way to do it.
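For illustration, here is a minimal sketch of such a lifecycle policy applied with boto3; the bucket name, prefix, and day thresholds are placeholders you would adjust to your retention requirements:

```python
# Minimal sketch: transition logs to cheaper storage classes over time,
# then delete them, using a single S3 lifecycle configuration.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-logs",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                    {"Days": 90, "StorageClass": "GLACIER"},      # archive
                ],
                "Expiration": {"Days": 365},  # remove after a year
            }
        ]
    },
)
```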
Is there any direct service that can be used to write data feeds from Adobe Analytics (Omniture) to a Google Cloud Storage bucket, or any alternative solution apart from setting up an FTP server on a GCP instance?
Unfortunately, there isn't.
Data feeds can currently deliver directly only to an AWS S3 bucket or an FTP/SFTP account (note: I didn't list FTPS, as it is unsupported).
You'll likely need to set up a jump point somewhere, either in AWS or an FTP site on Google as you suggest. I realize this doesn't answer your question, but I hope it at least gets you moving in the right direction.
A bit late to the party, but you might find some help setting up your own data feed transfer process and loading the data into BigQuery with Python at https://analyticsmayhem.com/adobe-analytics/data-feeds-google-bigquery/. Let me know if you have any questions.
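The core of such a transfer can be quite small. A minimal sketch, assuming the feed is delivered to S3 and that boto3 and google-cloud-bigquery credentials are configured; the bucket, file name, and destination table are placeholders, and the BigQuery table schema must match the feed's column manifest (data-feed files are tab-delimited with no header row):

```python
import gzip
import shutil
import boto3
from google.cloud import bigquery

# Fetch the daily feed file from the S3 delivery bucket (names are placeholders).
s3 = boto3.client("s3")
s3.download_file("adobe-feed-bucket", "reportsuite_2023-01-01.tsv.gz", "/tmp/feed.tsv.gz")

# Decompress so BigQuery receives plain tab-separated rows.
with gzip.open("/tmp/feed.tsv.gz", "rb") as fin, open("/tmp/feed.tsv", "wb") as fout:
    shutil.copyfileobj(fin, fout)

bq = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    field_delimiter="\t",
    skip_leading_rows=0,  # data-feed files carry no header row
)
with open("/tmp/feed.tsv", "rb") as f:
    job = bq.load_table_from_file(f, "my-project.adobe.hit_data", job_config=job_config)
job.result()  # block until the load job completes
```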
I'm working on a new game project at the moment that will consist of a React Native front end and a Lambda-based back end. The app requires some real-time features such as active user records, geofencing, etc.
I was looking at Firebase's Realtime Database, which looks like a really elegant solution for real-time data sync, but I don't think AWS has anything quite like it.
The three options I could think of for "serverless" real-time using only AWS services are:
Option 1: AWS IoT Messaging over WebSockets
This one is quite obvious: a managed WebSockets connection through the IoT SDK. I was thinking of triggering Lambdas in response to inbound and outbound events and just using WebSockets as the real-time layer, building custom handling logic on the app client as you typically would.
The downside, at least compared to Firebase, is that I will have to handle the data in the events myself, which adds another layer of management on top of WebSockets and will have to be standardized with the API data layer in the application's stores.
Pros:
Scalable bi-directional realtime connection
Cons:
Only works when the app is open
Message structure needs to be implemented
Multiple transport layers to be managed
Option 2: Push-triggered re-fetch
Another option is to use push notifications as real-time triggers, but use a regular HTTP request to API Gateway to actually fetch the updated payload.
I like this approach because it sticks to only one transport layer and a single source of truth for application state. It will also trigger updates when the app is not open, since these are push notifications.
The downside is that this involves a lot of custom work, with potentially difficult mappings between push notifications and the data that needs to be fetched; a rough sketch of the trigger side follows the pros and cons below.
Pros:
Push notifications work even when app is closed
Single source of truth and a single transport layer
Cons:
Most custom solution
Will involve many more HTTP requests overall
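For example, the trigger side could be a Lambda that publishes a silent push via SNS mobile push whenever a record changes, leaving the actual data fetch to the app. This is only a sketch of the idea; the endpoint ARN, payload shape, and helper name are made up:

```python
import json
import boto3

sns = boto3.client("sns")

def notify_record_changed(endpoint_arn: str, record_id: str) -> None:
    # content-available=1 asks iOS to deliver the push silently so the app
    # can wake up and re-fetch the record over API Gateway.
    apns_payload = json.dumps({"aps": {"content-available": 1}, "record_id": record_id})
    sns.publish(
        TargetArn=endpoint_arn,    # the device's SNS platform endpoint
        MessageStructure="json",   # per-protocol message bodies
        Message=json.dumps({"default": record_id, "APNS": apns_payload}),
    )
```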
Option 3: Cognito Sync
This one is newer to me, and I'm not sure whether it can actually be interfaced with from the server.
Cognito Sync offers user state synchronization across devices, complete with offline support, and it is part of the Cognito SDK, which I'll be using anyway. It sounds like just what I'm looking for, but I couldn't find any conclusive evidence as to whether it is possible to modify, or "trigger", updates from AWS and not just from one of the devices.
Pros:
Provides an abstracted real-time data model
Connected to Cognito user records OOTB
Cons:
Not sure whether it can be modified or updated from Lambdas
I'm wondering if anyone has experience doing real-time on AWS as part of a Lambda-based architecture, and whether you have an opinion on the best way to proceed?
I asked a similar question to the AWS Support, and this was their response.
My question to them:
What's the group of AWS services (if it's possible) to give that same in-browser, real-time DBaaS feel like Firebase?
AWS Cognito seems to be great for user accounts. Is there anything similar for the WebSockets / real-time DB part?
Their response:
To your question, Firebase is closest to the AWS service AWS Mobile Hub. You can check out more details about Mobile Hub from the links below.
https://aws.amazon.com/mobile/details/
https://aws.amazon.com/mobile/getting-started/
"AWS Cognito seems to be great for user accounts. Is there anything similar for the WebSockets / real-time DB part?"
Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit-millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.
Amazon DynamoDB can be further optimized with Amazon DynamoDB Accelerator (DAX), a fully managed, highly available, in-memory cache that can reduce DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. For more information, please see the documentation below.
https://aws.amazon.com/dynamodb/getting-started/
https://aws.amazon.com/dynamodb/dax/
Should you have any further questions, please do not hesitate to let me know.
Thanks.
Best regards,
Tayo O., Amazon Web Services
Check out the AWS Support Knowledge Center, a knowledge base of articles and videos that answer customer questions about AWS services:
https://aws.amazon.com/premiumsupport/knowledge-center/?icmpid=support_email_category
Also, while researching this answer, I found this, which looks interesting:
https://aws.amazon.com/blogs/database/how-to-build-a-chat-application-with-amazon-elasticache-for-redis/
The comments on that article are interesting as well.
Jacob Wakeem:
What advantage does this approach have over using AWS IoT? It seems that IoT has all this functionality without writing a single line of code and with a serverless architecture.
Sam Dengler:
The managed Pub/Sub feature in the AWS IoT service is also a good approach to message-based applications, like the one demonstrated in the article. With ElastiCache (Redis), customers who use Pub/Sub are typically also using Redis as a data store for other use cases such as caching, leaderboards, etc. That said, you could also use ElastiCache (Redis) with the AWS IoT service by triggering an AWS Lambda function via the AWS IoT rules engine. Depending on how the message-based application is architected and how the data is leveraged, one solution may be a better fit than the other.
Check out AWS AppSync for some of these real-time and offline features using different data sources, including databases, search, and compute.
AWS Amplify is AWS's modern answer to Firebase.
Fastest way to build mobile and web applications
AWS Amplify is a development platform for building secure, scalable mobile and web applications. It makes it easy for you to authenticate users, securely store data and user metadata, authorize selective access to data, integrate machine learning, analyze application metrics, and execute server-side code. Amplify covers the complete mobile application development workflow from version control, code testing, to production deployment, and it easily scales with your business from thousands of users to tens of millions. The Amplify libraries and CLI, part of the Amplify Framework, are open source and offer a pluggable interface that enables you to customize and create your own plugins.
It sounds like AWS Serverless is the most suitable alternative.
Also worth a read: AWS vs Firebase - Is It Even a Fair Fight?
AWS Amplify. You can find more information here: AWS Amplify
You could consider using Supabase.
It is open source and can be installed onto EC2 / Docker containers.
https://supabase.com/docs/guides/hosting/docker
I've found the hosted (free) solution really powerful for getting up and running quickly (I have yet to deploy to AWS).
I know this is an old question, but nowadays AWS offers AppSync... a service that destroys Firebase Realtime Database in every aspect.