Amazon AppFlow always loads the same data from a Slack channel - amazon-web-services

I wanted to archive a Slack channel conversation from a free-tier Slack workspace, and Amazon AppFlow seemed like a suitable approach.
I finished integrating my Slack app with AppFlow and set up a flow from a single channel to my S3 bucket. But the flow only fetched the oldest conversations, about 150 or so.
I thought that if I ran it multiple times (with enough time in between) AppFlow would crawl the data from oldest to latest in order, but it doesn't work like that. It keeps fetching the same data.
(The Slack free tier doesn't show all of the conversations. When I say 'oldest' data, I mean the conversations created several months ago.)
Am I missing something in the official docs?
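A minimal boto3 sketch, assuming a flow named slack-channel-archive (a placeholder), that re-triggers the flow on demand and lists how many records each execution pulled; this can help confirm whether successive runs really return the same batch:

```python
import boto3

appflow = boto3.client("appflow")

# Re-trigger the on-demand flow (flow name is a placeholder).
appflow.start_flow(flowName="slack-channel-archive")

# List recent executions and the record counts they processed.
executions = appflow.describe_flow_execution_records(flowName="slack-channel-archive")
for run in executions["flowExecutions"]:
    result = run.get("executionResult", {})  # may be absent for in-progress runs
    print(run["executionId"], run["executionStatus"], result.get("recordsProcessed"))
```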

Related

AWS AppFlow only ingests a small amount of data from Google Analytics

I want to use AppFlow to ingest data from Google Analytics, which tracks and stores user information and interaction data for a website. I have never worked with AppFlow or Google Analytics before. AppFlow behaves quite strangely and I don't know why; I hope somebody can give me some help or an explanation:
On the Google Analytics report console, I can see that we have a lot of customer data, roughly 20K users.
On the AppFlow side, when I create a flow, I select some objects (fields) from the source (GA) to ingest into S3, with an on-demand trigger and without any filter. What I found is that:
If I select 3 fields: ga:userAgeBracket, ga:userGender, ga:interestOtherCategory, the output in the S3 bucket ends up with ~1,000 rows.
If I select the 3 fields above and add some additional fields like ga:date, ga:day, ga:year, the output in the S3 bucket ends up with less data, ~100 records.
I have tried a few times (also with a scheduled trigger and full-load mode) and the result is still the same. I don't know whether, in the first scenario, AppFlow ingests all of my data from GA or only some of it.
Thank you very much!
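One way to sanity-check the numbers is to query GA directly for the same dimensions and compare the row count with what AppFlow writes to S3. A hedged sketch using the Reporting API v4 (the view ID and credentials file are placeholders; the API requires at least one metric, so ga:users is added here):

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder credentials file
    scopes=["https://www.googleapis.com/auth/analytics.readonly"],
)
analytics = build("analyticsreporting", "v4", credentials=creds)

response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": "123456789",  # placeholder view ID
        "dateRanges": [{"startDate": "365daysAgo", "endDate": "today"}],
        "metrics": [{"expression": "ga:users"}],
        "dimensions": [
            {"name": "ga:userAgeBracket"},
            {"name": "ga:userGender"},
            {"name": "ga:interestOtherCategory"},
        ],
        "pageSize": 10000,
    }]
}).execute()

# Total number of rows GA reports for this dimension combination.
print(response["reports"][0]["data"].get("rowCount"))
```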

Where does AWS Amplify DataStore store its data?

I have been looking for a Firebase Firestore equivalent in the AWS world, and I came across AWS Amplify and its DataStore module. From watching this video on the official Amplify website, I see it does pretty much exactly what I expected, namely providing a real-time data mechanism. But I haven't really seen where the data gets stored. Is this just a pub-sub kind of mechanism where I need to additionally write code that writes the data to where I want it stored, e.g. DynamoDB or some other destination in the cloud?

Migrating Google Analytics complete raw session data (all 290 fields) to an AWS S3 bucket

Any ideas or related resources on how to migrate Google Analytics raw session/hit-level data and dump it into an AWS S3 bucket?
I have tried a couple of technologies, but they are limited. AWS AppFlow, for instance, is limited to migrating only 9 fields. The Google Analytics API v4 is likewise limited to custom dimensions/metrics, so it cannot export the whole data set in its raw format.
Feedback appreciated
Thanks
You have to buy Google Analytics 360 (~$150k/year) and then connect the view you want to BigQuery. It will provide a historical export of the smaller of 10 billion hits or 13 months of data, within 4 weeks after the integration is complete (https://support.google.com/analytics/answer/3416092?hl=en).
There is no other way to get the complete raw data.
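Once the BigQuery export is running, the raw hit-level data lands in daily ga_sessions_ tables that you can query and then ship to S3 with your own pipeline. A hedged sketch of such a query; the project, dataset, and date range are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # placeholder project

# The GA 360 export writes one ga_sessions_YYYYMMDD table per day.
query = """
SELECT fullVisitorId, visitId, visitStartTime
FROM `my-gcp-project.my_ga_dataset.ga_sessions_*`
WHERE _TABLE_SUFFIX BETWEEN '20200101' AND '20200107'
"""

for row in client.query(query).result():
    print(row.fullVisitorId, row.visitId, row.visitStartTime)
```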

How long are metrics retained for SNS in CloudWatch?

I can't seem to find this info anywhere.
Specifically, I'm looking at metrics like NumberOfMessagesPublished and NumberOfNotificationsDelivered. How far back does CloudWatch retain this data?
2 weeks, according to Monitoring Your Instances Using CloudWatch:
These statistics are recorded for a period of two weeks, so that you can access historical information and gain a better perspective on how your web application or service is performing.
If you want to archive metrics beyond 2 weeks you can do so by calling the mon-get-stats command from the command line and storing the results in Amazon S3 or Amazon SimpleDB.
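mon-get-stats belongs to the legacy CloudWatch command line tools; a rough equivalent of the same archiving idea with boto3's GetMetricStatistics, with the topic name and archive bucket as placeholders:

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
s3 = boto3.client("s3")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)  # the retention window mentioned above

# Pull hourly sums for one SNS metric (topic name is a placeholder).
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/SNS",
    MetricName="NumberOfMessagesPublished",
    Dimensions=[{"Name": "TopicName", "Value": "my-topic"}],
    StartTime=start,
    EndTime=end,
    Period=3600,
    Statistics=["Sum"],
)

# Archive the datapoints to S3 (bucket name is a placeholder).
s3.put_object(
    Bucket="my-metrics-archive",
    Key=f"sns/NumberOfMessagesPublished/{end:%Y-%m-%d}.json",
    Body=json.dumps(stats["Datapoints"], default=str),
)
```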

Sync a GCS bucket with an S3 bucket (Lambda style)

Simple problem: I have a Google Cloud Storage bucket that receives content 3 times a day from an external provider. I want to fetch this content as soon as it arrives and push it to an S3 bucket. I have been able to achieve this by running my Python scripts as a cron job, but if I follow that route I also have to provide high availability and so on.
My idea was to set this up in AWS Lambda, so I don't have to sweat the infrastructure. Any pointers on this marriage between GCS and Lambda would be really helpful; I am not a native Node speaker.
GCS can send object notifications when an object is created or updated. You can catch the notifications (which are HTTP POST requests) with a simple web app hosted on GAE and then handle the file transfer to S3. It's a highly available, event-driven solution.
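A rough sketch of that handler, assuming Flask on GAE, the google-cloud-storage and boto3 clients, and a placeholder destination bucket; the payload fields follow GCS object change notifications:

```python
import boto3
from flask import Flask, request
from google.cloud import storage

app = Flask(__name__)
gcs = storage.Client()
s3 = boto3.client("s3")

@app.route("/gcs-notification", methods=["POST"])
def handle_notification():
    # The notification body carries the changed object's bucket and name.
    event = request.get_json(force=True)
    bucket_name = event["bucket"]
    object_name = event["name"]

    # Download the new object from GCS ...
    blob = gcs.bucket(bucket_name).blob(object_name)
    data = blob.download_as_bytes()

    # ... and push it to the destination S3 bucket (placeholder name).
    s3.put_object(Bucket="my-destination-bucket", Key=object_name, Body=data)
    return "", 204
```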