For multiple external tenants accessing Kafka, is there any issue with providing the same endpoint (i.e. the same set of brokers and ports) to multiple producers?
What are the best practices with respect to multiple tenants producing data on (Confluent) Kafka topics, with Kafka installed on GCP?
Thanks in advance!
Here are Google's best practices for enterprise multi-tenancy.
Confluent Cloud is a fully managed Apache Kafka as a Service in the public cloud. It looks like you can register here to download the white paper that contains a guide to Kafka best practices.
You may also find this presentation interesting.
Apache Kafka is available through the Google Cloud Marketplace.
Confluent is a Google partner, and you can reach out to them here by clicking the Contact partner button and filling out the form.
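On the original question itself: sharing one bootstrap endpoint across tenants is normal; isolation typically comes from per-tenant authentication, topic ACLs, and quotas rather than separate broker endpoints. Here is a rough sketch of that idea (broker addresses, credentials, and topic names are made up, and kafkajs is just one client choice):

```typescript
// Sketch: two tenants share the same bootstrap endpoint; isolation comes
// from per-tenant SASL credentials plus ACLs/quotas on the broker side.
// Broker addresses, usernames, and topic names below are hypothetical.
import { Kafka } from "kafkajs";

function producerForTenant(username: string, password: string) {
  const kafka = new Kafka({
    clientId: `tenant-${username}`,
    // Same endpoint handed to every tenant.
    brokers: ["broker-1.example.com:9092", "broker-2.example.com:9092"],
    ssl: true,
    sasl: { mechanism: "plain", username, password }, // per-tenant identity
  });
  return kafka.producer();
}

async function main() {
  // Each tenant gets its own credentials; ACLs restrict it to its own topics.
  const producerA = producerForTenant("tenant-a", process.env.TENANT_A_SECRET!);
  await producerA.connect();
  await producerA.send({
    topic: "tenant-a.events", // per-tenant topic prefix enforced via ACL rules
    messages: [{ value: JSON.stringify({ hello: "world" }) }],
  });
  await producerA.disconnect();
}

main().catch(console.error);
```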
I know this question probably gets asked a lot, but this is kind of a specific one.
I am currently working as a web developer for my company, and we are developing a web app for our customers (an energy management and monitoring system).
My boss asked me to take on the DevOps role for this application, which in this case basically means setting up the AWS services. The thing is, I am pretty new to setting up AWS services and I am not sure where I should start. I am hoping some of the more experienced users can suggest a workflow after I describe what the app will be doing and the stacks we are using.
App/Product description:
Our product comes with an IoT module (ESP32) that is placed in a customer's electrical cabinet. We connect several devices to this module so we can read the incoming data.
The module publishes this data to an MQTT broker. We then read the data from the broker using Telegraf, which forwards it to InfluxDB so we can use it in our application. All device data (from every customer) is saved to InfluxDB this way.
The application itself is written in NodeJS (backend) and VueJS/Nuxt (frontend). It's a simple dashboard application that shows each customer the data for his/her devices. Authorization is handled with nuxt-auth on the frontend and NestJS authentication on the backend. All customer data is saved to a SQL database.
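To make the data path concrete, here is a minimal sketch of how a NodeJS/NestJS backend might read one device's measurements back out of InfluxDB (the URL, org, bucket, and measurement names are assumptions, not the poster's actual setup):

```typescript
// Sketch: reading one customer's device data from InfluxDB in the backend.
// URL, token, org, bucket, and measurement names are hypothetical.
import { InfluxDB } from "@influxdata/influxdb-client";

const influx = new InfluxDB({
  url: process.env.INFLUX_URL ?? "http://localhost:8086",
  token: process.env.INFLUX_TOKEN,
});

export async function readDeviceData(deviceId: string) {
  const queryApi = influx.getQueryApi("my-org");
  // Flux query scoped to a single device, so each customer
  // only ever sees rows for their own hardware.
  const flux = `
    from(bucket: "telemetry")
      |> range(start: -1h)
      |> filter(fn: (r) => r._measurement == "energy" and r.device == "${deviceId}")
  `;
  return queryApi.collectRows(flux);
}
```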
So this is a simple overview of the app. All technologies used are:
Telegraf
InfluxDB
NodeJS (NestJs)
VueJS (NuxtJs)
MySQL
IoT
MQTT protocol
What would be the best way to deploy this app using AWS?
Our dev team is currently thinking of this:
AWS Fargate: Dockerizing the application and using Fargate to deploy it
AWS RDS: SQL database
InfluxDB
Telegraf
AWS IoT Core
The application should be online 24/7, and we expect our user base to be around 500 users (recurring logins on the platform).
I have been looking for similar questions that match our needs but didn't really find any. If someone does find one, please feel free to share it.
Should you need more info, I will be happy to supply it.
I'm new to GCP and currently have a microservice architecture using GKE and gRPC. The microservices publish events to Google Cloud Pub/Sub. My Web-UI uses Google Cloud Endpoints to send requests to the microservices.
I want a lot of live/push updates on the website (such as live-updating user statistics), and I now wonder how this is best done. Is it bad practice to let the Web-UI subscribe to a topic in Google Cloud Pub/Sub? Are there other technologies in GCP that may be better for this case?
Cloud Pub/Sub is intended for "torrent" use cases, where you have a lot of data communicated between relatively few publishers/subscribers.
Firebase Cloud Messaging might better fit the particular "trickle" use case you describe, where you are sending smaller updates to many, possibly transient, subscribers.
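A common shape for this pattern (a sketch only; the subscription and topic names below are invented) is a small backend worker that subscribes to the Pub/Sub topic and fans each event out to browsers through FCM, so the Web-UI never holds a Pub/Sub subscription itself:

```typescript
// Sketch: backend worker bridges Pub/Sub ("torrents") to FCM ("trickles").
// Subscription and FCM topic names are hypothetical.
import { PubSub } from "@google-cloud/pubsub";
import * as admin from "firebase-admin";

admin.initializeApp(); // picks up default GCP credentials

const pubsub = new PubSub();
const subscription = pubsub.subscription("user-stats-sub");

subscription.on("message", async (message) => {
  const stats = JSON.parse(message.data.toString());
  // Push the small update to every browser subscribed to the FCM topic.
  await admin.messaging().send({
    topic: "live-user-stats",
    data: { stats: JSON.stringify(stats) },
  });
  message.ack();
});
```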
I am currently working on a web portal for a foundation. Applicants for a grant will receive access credentials in advance, independently of this portal. New applications are then created and processed in the portal itself. Once an application is complete, it is submitted. Later, the application is approved or rejected.
There are a number of technical requirements over which I have no influence. The frontend should be implemented using HTML + JavaScript. The backend should use Amazon Web Services (AWS). If anything needs to be programmed for the backend, then C# should be used.
I know how to implement the classic client-server solution. At the moment, however, AWS offers me an unmanageable number of services, so I'm hoping for suggestions as to which of them I should take a closer look at. Ideally, no complete 'server solution' should run on a virtual server; instead, Lambda functions are mentioned again and again. Would Amazon RDS and AWS Lambda be a sensible and sufficient combination? Did I miss something?
Thank you very much for your suggestions.
One solution would be to use AWS S3 to serve the HTML, CSS, JS, images and other static content. You could use AWS Lambda via AWS API Gateway as a backend. AWS Lambda would then connect to AWS RDS, or to AWS DynamoDB if you prefer a NoSQL solution.
[Architecture diagram from the AWS GitHub repo]
You can get a more detailed description of how to set this up at
https://github.com/aws-samples/aws-serverless-workshops/tree/master/WebApplication/
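To give a feel for the Lambda-plus-API-Gateway part, here is a minimal handler sketch. It is written in TypeScript for brevity (your version would be C#, but the handler shape is the same), and the table and field names are made up:

```typescript
// Sketch of a Lambda handler behind API Gateway that reads an application
// record from DynamoDB. Table and field names are hypothetical.
import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

const dynamo = new DynamoDBClient({});

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // API Gateway maps e.g. GET /applications/{id} onto this handler.
  const id = event.pathParameters?.id;
  if (!id) return { statusCode: 400, body: "missing application id" };

  const result = await dynamo.send(
    new GetItemCommand({
      TableName: "grant-applications",
      Key: { id: { S: id } },
    })
  );

  return result.Item
    ? { statusCode: 200, body: JSON.stringify(result.Item) }
    : { statusCode: 404, body: "not found" };
};
```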
Just curious to understand whether there is any logical reasoning behind the naming of AWS products and services. For example, it is named AWS Lambda and not Amazon Lambda, yet it is Amazon S3 and not AWS S3.
If you hover over the Products menu on the AWS homepage, you can see a list of all products and services at a glance, prefixed with either 'Amazon' or 'AWS'.
I managed to find an answer on the naming convention for AWS products and services in another similar question posted here. The response was provided by a Senior Technical Trainer working at Amazon Web Services:
The pattern is that utility services are prefixed with AWS, while standalone services are prefixed by "Amazon".
Services prefixed with AWS typically use other services, for example:
• AWS Elastic Beanstalk, AWS OpsWorks and AWS CloudFormation launch other services
• AWS Lambda is triggered by other services
• AWS Data Pipeline moves data between other services
The AWS documentation page is a great reference for determining the official name of a service.
As far as I understand, the prefix AWS is used for PaaS (Platform as a Service) and the prefix Amazon is used for IaaS (Infrastructure as a Service). The term AWS (Amazon Web Services) is used whenever something is offered as a service/platform, whereas Amazon is used whenever a hardware resource/infrastructure is provided.
For example, on the products page of the AWS site, in the compute category, Amazon EC2 is IaaS providing compute capacity, whereas AWS Elastic Beanstalk is PaaS, a platform for deploying web services and web apps/websites. Likewise, AWS Lambda is PaaS for serverless computing, which lets us run code without provisioning or managing servers. Similarly, in the storage category, Amazon S3 is IaaS providing storage capabilities, whereas AWS Snowball, a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud, is more of a PaaS.
This is just a logical assumption, though, as we never really know how Amazon has named its products and services. So please forgive me if there are differences of opinion on this.
At one of the AWS meetups it was said that Amazon itself uses a few of its cloud services, and these are named with the 'Amazon' prefix.
I am not sure how much of this is true, though.
Web Service definition (wiki):
A web service (WS) is either:
a service offered by an electronic device to another electronic device, communicating with each other via the Internet, or
a server (not an operating-system service) running on a computer device, listening for requests at a particular port over a network, serving web documents (HTML, JSON, XML, images).
Context: the web service, initially designed as a replacement for Remote Procedure Call (RPC), was a revolutionary idea during the Internet boom, based mainly on XML. Amazon's philosophy was to manage all ERP and customer requests using IT (web services) instead of traditional paper-based processes (or RPC, or non-automated tools). The same approach was then applied from books to compute resources (that's how the S3 and EC2 products came to be).
Any service designed to be used by the customer mainly through an API (or web service; today it would be called an 'API-first' product) is part of the AWS collection of services, while a service seen as a traditional product (a replacement for something you would install on your desktop, or use from the cloud mainly through a UI) is part of the Amazon collection of services. Today we can see exceptions to this rule. Initially this was the thinking of Jeff Bezos. To understand more about his philosophy, read The Secret of Amazon's Success: Internal APIs:
Think about what Bezos was asking! Every team within Amazon had to interact using Web Services.
Anyone who doesn’t do this will be fired. Thank you; have a nice day!
I have to build an online bookstore on AWS using the SQS, SES and RDS services as homework, but I'm at a standstill. I have read through the documentation for these services provided by Amazon, but I cannot figure out how to make them communicate with each other, or how to set up instances of the named services. SQS should be the backbone of this store, RDS should contain users and products in stock, and SES is used to notify the customer. I have searched Google as thoroughly as I could but could not find anything related to my problem. If anyone could give me some pointers or lead me to some reading I may have missed, I would be most grateful.
These services talk to each other, but they are functionally separate. You connect to and populate an RDS database the same way you'd connect to and populate any remote MySQL database. SQS and SES are both driven through the AWS API, which you tap into using the Amazon API tools:
http://aws.amazon.com/developertools?_encoding=UTF8&jiveRedirect=1
You just create your Amazon AWS account, get your access credentials, put them into the environment variables (read the READMEs in the tool downloads), and start using them.
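As a minimal illustration of driving SQS and SES from code (using the current AWS SDK; the queue URL, region, and email addresses below are placeholders):

```typescript
// Sketch: enqueue an order in SQS and send a confirmation email via SES.
// Queue URL, region, and email addresses are hypothetical.
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
import { SESClient, SendEmailCommand } from "@aws-sdk/client-ses";

const sqs = new SQSClient({ region: "us-east-1" });
const ses = new SESClient({ region: "us-east-1" });

export async function placeOrder(bookId: string, customerEmail: string) {
  // 1. Put the order on the queue (the "backbone" of the store).
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/orders",
      MessageBody: JSON.stringify({ bookId, customerEmail }),
    })
  );

  // 2. Notify the customer via SES.
  await ses.send(
    new SendEmailCommand({
      Source: "store@example.com",
      Destination: { ToAddresses: [customerEmail] },
      Message: {
        Subject: { Data: "Order received" },
        Body: { Text: { Data: `We received your order for book ${bookId}.` } },
      },
    })
  );
}
```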
Hope that helps.