Connecting on-prem MQ to Google Cloud Platform

This is more of a conceptual question, as there is no relevant documentation available. We have an on-prem IBM MQ from which we need to transfer data to our cloud storage bucket (GCP/AWS). What could be possible solutions in this case? Any help or direction would be appreciated. Thank you!

I'm assuming you can reach your goal once the MQ data has been converted to a format supported by BigQuery.
You can refer to this Google documentation for a full guide on loading data from local files. You can upload the file via the GCP Console or with client libraries in the programming language that matches your on-prem environment. There is also a variety of upload options to choose from depending on the data file format. Make sure you also have the right permissions to use BigQuery.
If you require authentication, check this BigQuery Authentication Guide.
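For illustration, here is a minimal sketch of loading a local file (e.g. a CSV exported from MQ) into BigQuery with the Python client library; the project, dataset, and table names are placeholders and the schema is auto-detected:

```python
# Minimal sketch: load a local CSV (exported from MQ) into BigQuery.
# Assumes google-cloud-bigquery is installed and application default
# credentials are configured; the table ID below is a placeholder.
from google.cloud import bigquery

client = bigquery.Client()

table_id = "my-project.mq_exports.messages"  # hypothetical destination table

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # let BigQuery infer the schema
)

with open("mq_export.csv", "rb") as source_file:
    load_job = client.load_table_from_file(source_file, table_id, job_config=job_config)

load_job.result()  # wait for the load job to complete
print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")
```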

Related

Has anyone set up SFTP with Google Cloud Platform to trigger a Cloud Function?

So I was able to set up SFTP so that I can send a file to my VM instance on GCP.
However, does anyone have any guidance on how to then have a Cloud Function run when a file is uploaded?
The objective: our customers send booking requests via SFTP in a certain EDI format.
I would need a Cloud Function to then run off of what was sent.
Any direction would be greatly appreciated 🙏
As an idea to think about: you might implement an SFTP server on a VM in such a manner that it uses Cloud Storage buckets for file management behind the externally exposed standard API. In that case you may be able to use storage events to trigger Cloud Functions.
I think such implementations do exist, so it should be possible to find some examples.
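As a rough sketch of that idea (assuming a background-style Cloud Function and placeholder names for the function and bucket), the storage-triggered handler could look like this:

```python
# Minimal sketch of a Cloud Function triggered when a file lands in a
# Cloud Storage bucket (e.g. written there by an SFTP gateway).
# It could be deployed with something like:
#   gcloud functions deploy handle_upload --runtime python311 \
#       --trigger-bucket my-sftp-bucket
# "handle_upload" and "my-sftp-bucket" are placeholder names.

def handle_upload(event, context):
    """Background function triggered by a google.storage.object.finalize event."""
    bucket = event["bucket"]
    name = event["name"]
    print(f"New file gs://{bucket}/{name} uploaded at {event.get('timeCreated')}")
    # Parse the EDI payload here, e.g. download it with google-cloud-storage
    # and hand it to your booking pipeline.
```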
So I could never figure out a simple, clean way of doing it on GCP. I did find this new platform called Stedi, and I was able to resolve my needs with it.
DISCLAIMER: I am not affiliated with Stedi in any way. I just want others to know that it's out there because it could save some developers some major headaches.

Why does my AWS AppFlow setup not show any subobject for selection?

I have just been given admin access to a Google Analytics portal that tracks the corporate website's activity. The tracked data are to be moved to Amazon S3 via AppFlow.
I followed the official AWS documentation on how to set up the connection between GA and AWS. We created the connection successfully, but I came across an issue I can't find an answer to:
The Subobject field is empty. There are already ~4 months' worth of data, so I don't think it's an empty-data issue. This prevents me from proceeding with creating the flow, as it is a required field. Any thoughts?
Note: the client and the team are new to AWS, so we are setting it up as we go, learning along the way. Thank you for the help!
Found the answer! The Google Analytics account should have a Universal Analytics property available. Here are a few links:
https://docs.aws.amazon.com/appflow/latest/userguide/google-analytics.html
https://support.google.com/analytics/answer/6370521?hl=en

Question about the high-level architecture required to process and visualize fitness app data (from Apple Health, for example) using Google Cloud services

I'm working on a project where I am tasked with using Google Cloud services to process and visualize fitness data. For example, I have exported some Apple Health data from my watch, and it is in .xml format.

From a high level, I envision this .xml file starting off in object storage and being converted to .csv by a Cloud Function (triggered by the creation of the .xml object in storage), then stored again in object storage (a different bucket). These .csv files would then be processed by a Dataflow pipeline, which reformats the data to the template schema I would like the data to be organized with. The pipeline outputs the resulting .csv to BigQuery, which is then designated as a data source for Data Studio. Finally, I would configure Data Studio to produce some simple reports that compare the health data to recommended values. I would also like this report to be accessible as a .pdf in object storage.

Am I on the right track, or am I missing some key services to accomplish this?
Also, I'm new to posting on StackOverflow, so if this question is against the rules or not welcome, please let me know.
Any feedback is greatly appreciated, as I have not been able to bounce these ideas off of other experienced cloud architects/developers.
This question is currently off-topic by the rules of Stack Overflow, as it does not contain a specific problem to resolve; see points 4-5.
As high-level advice, I do not see why it should not be possible with the services you mentioned, but you would need to implement it, try it on your side, and evaluate the features of each service in your workflow.
In terms of solution or architecture advice, those are generally paid services and you would most likely find little help here unless you have a specific problem to solve with said services. You might also find some help on the internet, e.g. Cloud Solutions, Built it on GCP, etc.
You might find this interesting to review as well, as it mimics your solution. Hope this helps.
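As an illustration of the first step in the pipeline described in the question, here is a minimal sketch of a storage-triggered Cloud Function that converts an Apple Health .xml export into a .csv in a second bucket; the bucket name and the Record attributes are assumptions based on the usual Apple Health export layout:

```python
# Minimal sketch of the XML-to-CSV step: a Cloud Function triggered by an
# Apple Health export landing in one bucket, writing a CSV to a second
# bucket. "health-csv-bucket" and the Record attribute names are assumptions.
import csv
import io
import xml.etree.ElementTree as ET

from google.cloud import storage

OUTPUT_BUCKET = "health-csv-bucket"  # hypothetical destination bucket

def convert_health_export(event, context):
    client = storage.Client()
    blob = client.bucket(event["bucket"]).blob(event["name"])
    root = ET.fromstring(blob.download_as_bytes())

    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["type", "startDate", "endDate", "value", "unit"])
    for record in root.iter("Record"):
        writer.writerow([record.get(k) for k in ("type", "startDate", "endDate", "value", "unit")])

    out_name = event["name"].rsplit(".", 1)[0] + ".csv"
    client.bucket(OUTPUT_BUCKET).blob(out_name).upload_from_string(
        out.getvalue(), content_type="text/csv"
    )
```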

Does Google store the requests that are sent via the Google DLP API?

I am trying to understand whether Google stores text or data that are sent to the DLP API. For example, I have some data (text files) locally and I am planning to use Google DLP to help identify sensitive information and possibly transform it.
Would Google store the text file data that I am sending? In other words, would it retain a copy of the files I send? I have tried to read through the security and compliance page, but I could not find anything that clearly explains this.
Could anyone please advise?
Here is what I was looking at https://cloud.google.com/dlp/data-security
The Google DLP API only classifies and identifies the kind of (mostly sensitive) data we want to analyse; Google doesn't store the data we send. From the data security page linked above:
We certainly don't store the data being scanned with the *Content api methods beyond what is needed to process it and return a response to you.
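For reference, here is a minimal sketch of calling one of those *Content methods (inspect_content) from the Python client, which scans text in-flight and returns only the findings; the project ID and info types are placeholders:

```python
# Minimal sketch of scanning local text with the DLP content methods
# (inspect_content). Assumes google-cloud-dlp is installed and credentials
# are configured; project ID and info types below are placeholders.
from google.cloud import dlp_v2

def inspect_text(project_id: str, text: str):
    client = dlp_v2.DlpServiceClient()
    parent = f"projects/{project_id}"

    response = client.inspect_content(
        request={
            "parent": parent,
            "inspect_config": {
                "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
                "include_quote": True,
            },
            "item": {"value": text},
        }
    )
    for finding in response.result.findings:
        print(finding.info_type.name, finding.quote)

# Example:
# inspect_text("my-project", "Contact me at jane@example.com")
```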

Are there monitoring tools for AWS S3 and CloudFront?

I am using the Amazon services S3 and CloudFront for a web application, and I would like to get various statistics about access to the data I am serving, based on the logs of those services (logging is activated in both services).
I did a bit of googling and the only thing I could find is how to manage my S3 storage. I also noticed that New Relic offers monitoring for many Amazon services, but not for those two.
Is there something that you use? A service that could read my logs periodically and provide me with some nice analytics that would make developers and managers happy?
I am trying to avoid writing my own log parsers.
I believe Piwik supports the Amazon S3 log format. Take a look at their demo site to see some example reports.
Well, this may not be what you expect, but I use Qloudstat for my CloudFront distributions.
The $5 plan covers my needs; that's less than a burrito where I live.
Best regards.
Well, we have a SaaS product, Cloudlytics, which offers many reports including geo, IP tracking, SPAM, and CloudFront cost analysis. You can try it for free for up to 25 MB of logs.
I might be answering this very late, but I have worked on a Go library that can analyse CDN and S3 usage and store the results in a backend of your choice (InfluxDB, MongoDB, or Cassandra) for later time-series evaluation. The project is hosted at http://github.com/meson10/cdnlysis
See if this fits.
Popular 3rd party analytics packages include S3stat, Cloudlytics and Qloudstat. They all run around $10/month for low traffic sites.
Several stand-alone analytics packages support Amazon's logfile format if you want to download logs each night and feed them in directly. Others might need the logs pre-processed into Combined Logfile Format (CLF) first.
I've written about how to do that here:
https://www.expatsoftware.com/articles/2007/11/roll-your-own-web-stats-for-amazon-s3.html
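If you do need that pre-processing step, a rough sketch of converting S3 server access log lines to Combined Log Format in Python could look like this (field positions follow the documented S3 access log layout; adjust if your logs include newer trailing fields):

```python
# Rough sketch: convert Amazon S3 server access log lines into Combined
# Log Format so generic log analyzers can read them. The regex follows the
# standard S3 access log field order; lines that don't match are skipped.
import re
import sys

S3_LINE = re.compile(
    r'^(?P<owner>\S+) (?P<bucket>\S+) \[(?P<time>[^\]]+)\] (?P<ip>\S+) '
    r'(?P<requester>\S+) (?P<request_id>\S+) (?P<operation>\S+) (?P<key>\S+) '
    r'"(?P<request>[^"]*)" (?P<status>\S+) (?P<error>\S+) (?P<bytes>\S+) '
    r'(?P<size>\S+) (?P<total_time>\S+) (?P<turnaround>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def to_clf(line):
    """Return a Combined Log Format line, or None if the line doesn't parse."""
    m = S3_LINE.match(line)
    if not m:
        return None
    sent = m.group("bytes") if m.group("bytes") != "-" else "0"
    return (
        f'{m.group("ip")} - - [{m.group("time")}] "{m.group("request")}" '
        f'{m.group("status")} {sent} "{m.group("referrer")}" "{m.group("agent")}"'
    )

if __name__ == "__main__":
    # Usage: python s3_to_clf.py < access_log > combined_log
    for raw in sys.stdin:
        clf = to_clf(raw)
        if clf:
            print(clf)
```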