I'm developing a Dialogflow agent for bookings. My problem is that I need to deploy the agent for multiple clients, each with their own calendar. Unfortunately, on Google Cloud Platform it is only possible to have one agent per project, and at the same time the number of projects is limited. How can I solve this? I have 3 possible solutions, but I'm open to suggestions.
Ask Google for more projects and associate a project with each of my clients. I would be able to manage the projects with a service account. But how much would it cost? Could I request more than 1000 projects?
Create a new Google Cloud Platform account for every client and create a project in each account (like the Qwiklabs accounts in the Google courses). The problem is that I don't know how to scale this solution, since I'd need to automate the process and I don't want to create an account manually each time.
Use the same GCP account and the same agent for multiple clients. This would require entering a unique code when starting the chat to identify which calendar we are referring to. That way, though, I won't be able to integrate the chat into each client's website or Facebook page unless I give the same credentials to everyone.
What do you think could be the best solution? Do you have any other ideas to solve this problem?
Thank you guys
In terms of the best solution, it would be best to create a project for each client. When using Dialogflow, each project can have at most one agent, so you need multiple projects if you need multiple agents either way.
Additionally, when it comes to the number of projects you can have in GCP, the limit for the average user is 30 projects. However, you can always increase that number by requesting a higher limit; you can do so by referencing this document here.
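If you do go the project-per-client route and want to automate it with a service account (as mentioned in option 1), a minimal sketch using the Resource Manager client library might look like the following. The function name, project naming scheme and folder ID are assumptions to adapt to your setup, and it presumes the project quota has already been raised and the service account can create projects:

```python
# Sketch only: create one project per client with the Resource Manager API.
# Assumes a service account with project-creation permission on the folder
# and an already-raised project quota; names/IDs below are placeholders.
from google.cloud import resourcemanager_v3

def create_client_project(client_slug: str, folder_id: str):
    client = resourcemanager_v3.ProjectsClient()
    project = resourcemanager_v3.Project(
        project_id=f"booking-agent-{client_slug}",   # must be globally unique
        display_name=f"Booking agent {client_slug}",
        parent=f"folders/{folder_id}",
    )
    operation = client.create_project(project=project)
    return operation.result()  # blocks until the project exists

# Example (placeholder IDs): create_client_project("acme", "123456789012")
```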
Related
I have a SaaS billing model and each user has their own GCP project. This is similar to this reddit thread, which asks:
I'm thinking about selling a SaaS service. I've decided every customer will get their own GCP project; every customer will have a bunch of Cloud Run services, a Cloud SQL database, and some users in Identity Platform. I know the default project limit is around 12 and it can be increased by filling out a form.
This works for something like BigQuery, where each user's Dataset or Table will be created within their own GCP project, and thus their billing (and data) will be segmented under their project.
However, I also have some shared endpoints on Google Cloud Functions. For example, let's say I have general/shared endpoints to do something like "export data". The query to grab the data will of course hit the correct GCP project, but the export (or some other data processing task) can be very expensive; some exports might take over an hour to write the data when dealing with billions of rows. What would be the suggested way to set that up so the end user pays for their own computation? An endpoint such as www.example.com/api/export is just going to run under the main project's account, and we wouldn't want, for example, 1000 different Cloud Functions that do the same thing just to have one under each respective project.
What might be a solution to this? In a way I suppose I'm looking for something like this, where the requester pays.
You would probably need to record how long each function call took, and save that data somewhere before exiting the shared function.
The only alternative would be to split the function for each client, and use billing labels to help with allocation.
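As a minimal sketch of the first approach (recording how long each call took), assuming an HTTP Cloud Function in Python; record_usage() is a hypothetical helper standing in for wherever you persist per-client usage data (BigQuery, Firestore, etc.):

```python
# Sketch: time each invocation of the shared function and save the duration
# per client before returning. record_usage() is hypothetical.
import time
import functions_framework

def record_usage(client_id: str, seconds: float) -> None:
    # Hypothetical: persist (client_id, seconds, timestamp) for later billing.
    pass

@functions_framework.http
def export_data(request):
    client_id = request.args.get("client_id", "unknown")
    start = time.perf_counter()
    try:
        # ... run the expensive export against the client's own project ...
        return ("export finished", 200)
    finally:
        record_usage(client_id, time.perf_counter() - start)
```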
I have a product for which I would like to create a dashboard to show
its availability/uptime over time and display any outages.
Specifically I am looking for:
* the ability to report historical information on service uptime
* details on any service outages
The product runs on a fleet of Linux servers and connects to a DB running on a
separate instance; we also have some dedicated instances that run nightly
batch jobs. The system also relies on some external services to provide
additional functionality for select customers. There is also a Redis cache for
caching data for multiple customers.
We replicate all of the above setup (application servers, DB, job servers, Redis
cache, etc.) into dedicated clusters for large customers. Small customers are put
on one of the shared clusters to keep costs low.
Currently we are running health checks on the application servers only and presenting
that information on a simple HTML page. This is the go-to page for end users/customers
and support teams.
Since the product is built from multiple systems/services, our current HTML
page oftentimes says that the system is up and running fine while it may actually be
experiencing issues with some of its components or external services.
The current health check uses a simple HTTP request and looks for a 200
status code; this check runs every minute and we plot the data in a simple
chart showing the last 30 days. We also show a list of outages with timestamps and
additional static information that is added manually.
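For reference, the kind of check described above could look roughly like this sketch; the URL and the way results are stored are placeholders:

```python
# Rough sketch of the existing check: request the page once a minute, treat
# anything other than a 200 as "down", and append the result to whatever
# store feeds the 30-day chart.
import datetime
import requests

HEALTH_URL = "https://app.example.com/health"  # placeholder

def check_once() -> dict:
    try:
        resp = requests.get(HEALTH_URL, timeout=10)
        up = resp.status_code == 200
    except requests.RequestException:
        up = False
    return {"timestamp": datetime.datetime.utcnow().isoformat(), "up": up}

# Run from cron (or a loop) every minute and append check_once() to your store.
```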
We would like to build a more robust solution that monitors much more than the HTTP port
and gives us more detail: which part of the system is having issues, how those issues
are impacting the system, and which customers are affected.
Appreciate any guidance or help. We prefer to build the solution using
open source tools since we don't have much budget. The goal is to improve things for
my team members, who are already overloaded.
I'm not sure if this will be overkill or not for your setup, given that I don't know your product, but have a look at the ELK Stack and see if you can use some components or at least some ideas from there:
What is the ELK Stack?
The Complete Guide to the ELK Stack
I'm looking to get help with GCP billing. I know we can get cost info based on the service and project; however, is it possible to get info based on the access email ID? I'm planning to give access to my colleagues and I want to know how much each one's access costs and against which service.
Something like: Date, Email ID, Service, Cost
With respect to a given project, how would we know whose access is costing us so much?
We are running ~30 sandbox projects internally, each allocated to a specific person who can test and run his/her stuff on GCP.
I strongly suggest you create isolated workspaces (projects) for your colleagues so they don't accidentally delete/update other people's services. You will get a separate billing report for each project as well.
I am also setting up a billing alert for all my colleagues so they get an early notification if they leave something running on their testbench.
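If you want to script those alerts rather than click through the Console, a rough sketch using the google-cloud-billing-budgets client library might look like the following; the billing account ID, project ID and thresholds are placeholders, and the exact types assumed here are from that library:

```python
# Sketch: create a budget alert per sandbox project. Assumes the
# google-cloud-billing-budgets library; all IDs/amounts are placeholders.
from google.cloud.billing import budgets_v1
from google.type import money_pb2

def create_sandbox_budget(billing_account: str, project_id: str, usd_limit: int):
    client = budgets_v1.BudgetServiceClient()
    budget = budgets_v1.Budget(
        display_name=f"sandbox-{project_id}",
        budget_filter=budgets_v1.Filter(projects=[f"projects/{project_id}"]),
        amount=budgets_v1.BudgetAmount(
            specified_amount=money_pb2.Money(currency_code="USD", units=usd_limit)
        ),
        # Notify at 50% and 90% of the monthly limit.
        threshold_rules=[
            budgets_v1.ThresholdRule(threshold_percent=0.5),
            budgets_v1.ThresholdRule(threshold_percent=0.9),
        ],
    )
    return client.create_budget(
        parent=f"billingAccounts/{billing_account}", budget=budget
    )
```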
There are three ways I think you could do that kind of cost segregation; I will number them in order of complexity.
1.- Cloud Billing export. For this one the best practice is to segregate your resources and users by "labels". As administrator, you can ask the users to assign labels to any resource they create, e.g. when they create a new VM instance. Then you will be able to filter the exported table by that field and build whatever reports you want (your GCP billing dashboard will also show these label segregations); see the sketch after this list.
2.- Use the Cloud Billing API to curl directly the information you need from it; in the request you can work with fields such as SKU, user, date, and description.
3.- Usage reports. This solution is more G Suite scoped, and I can't vouch that it will work as the documentation says, but you can take a look at it. There is an option to get "usage reports"; these usage reports can be generated from G Suite for any resource under it, GCP included, if you already have an organization.
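As a sketch of option 1: once Cloud Billing export to BigQuery is enabled and resources carry an owner-style label, you could group cost per colleague with a query like the one below. The export table name and the 'owner' label key are assumptions to replace with your own:

```python
# Sketch: query the BigQuery billing export, grouping cost by an "owner" label.
# Table name and label key are placeholders for your own export/convention.
from google.cloud import bigquery

QUERY = """
SELECT
  l.value AS owner,
  service.description AS service,
  SUM(cost) AS total_cost
FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`,
  UNNEST(labels) AS l
WHERE l.key = 'owner'
  AND usage_start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY owner, service
ORDER BY total_cost DESC
"""

client = bigquery.Client()
for row in client.query(QUERY).result():
    print(row.owner, row.service, row.total_cost)
```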
Is it possible to open a website, like facebook.com for example, on an Amazon web service?
My objective is to automate a certain task in a game and to do so without having to be online on my computer. The point is to spend less time on that game, but not be left behind on progress. (I'm building a bot to automate the daily tasks there; I just need to know if I can leave everything running on Amazon.)
Another project I want to do is to automate access to my email account and perform certain tasks depending on the emails I receive.
You get the point. I tried searching on Google but I only find results about creating or hosting your own website there, not about accessing existing websites and automating them.
It sounds like what you want is a virtual private server - basically a computer in the cloud that you control and that is always on.
AWS has a service called Lightsail for this kind of purpose. Under the hood Lightsail just uses EC2, but it takes away a lot of the options and configuration to provide a simpler 'click and go' kind of service.
Once you have a server you can schedule regular tasks. Depending on the complexity of your needs, you could look at using cron as a scheduler and curl for your HTTP requests.
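As a rough sketch of that idea (not tied to any particular game or mail provider), a small Python script on the server could make the HTTP request and be scheduled by cron; the endpoint and payload below are purely placeholders:

```python
# Sketch: a small script run daily by cron to perform one HTTP task.
# URL and payload are placeholders for whatever the task actually needs.
import requests

def run_daily_task():
    resp = requests.post(
        "https://game.example.com/api/daily-task",  # placeholder endpoint
        json={"action": "collect_daily_reward"},
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    run_daily_task()

# Crontab entry to run it every day at 06:00 server time:
#   0 6 * * * /usr/bin/python3 /home/ubuntu/daily_task.py
```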
For the specifics of any project you have I would suggest opening a new question with details of what you are trying to do, the reading you have done, and examples of any code you have tried.
Nowadays a lot of web applications provide an API for other applications to use.
I am new to using APIs, so I want to understand the use cases for them.
Let's take Basecamp as an example.
What are the use cases for using their API in my web application?
For inserting the current data in my web application into a newly created Basecamp account, instead of inserting everything manually, which could take days or weeks if the data is huge?
For updating my application's data when the user changes something in Basecamp? If so, how do I know, for example, when a user adds/edits/removes a contact in Basecamp? Do I make a request and check every minute from the backend?
For making a backup of the Basecamp data so I can move it to other applications if necessary?
Are all the above examples good use cases for the usage of API?
Are there more use cases?
I want to have a clear picture of why it's good to use another web service's API and how I can leverage that in my application.
Thanks.
I've found the biggest reason to use and provide web services is to be able to programmatically drive the application with another process. This allows the coupling of different actions in different applications driven by one event/process/trigger.
For example, I could use a web service provided by Basecamp, my bug tracking database, and the continuous integration server. I could tie all those things together and kick them off from a commit hook script.
I can have a monitor in production automatically open a ticket in our ticket tracker. This could trigger an autoremediation process from the ticket tracker which logs into the box remotely and restarts the service.
The other major reason I've seen to use and provide web services is to reduce double entry. If you do change management in your production environment, that usually means you create change tickets. The changes that occur may also need to be reflected in the Change Management Database, which is usually a model of how production is supposed to look. Most of these systems don't automatically drive the update of your configuration item with the data from the change. Using web services you can stitch them together to eliminate the double (manual) entry that would normally occur.
APIs are used any time you want to get data to/from an application without using the default interface.
* I'd bet there's a mobile app that uses the Basecamp API.
* You could use the API to pull information from Basecamp into another application (like project management software or an individual's to-do webpage); see the sketch after this list.
* The geekiest of us may prefer to update Basecamp from a script/command line rather than interrupting our workflow to open a web page and click around.
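As a small illustration of the second point, pulling data out of Basecamp from a script might look roughly like this, assuming the Basecamp 3 REST endpoints and an OAuth access token obtained beforehand; the account ID and token are placeholders:

```python
# Sketch: list Basecamp projects via the Basecamp 3 API, for pulling data
# into another application. Account ID and token are placeholders.
import requests

ACCOUNT_ID = "999999999"   # your Basecamp account ID (placeholder)
ACCESS_TOKEN = "..."       # OAuth token obtained separately

def list_projects():
    resp = requests.get(
        f"https://3.basecampapi.com/{ACCOUNT_ID}/projects.json",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "User-Agent": "MyApp (you@example.com)",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

for project in list_projects():
    print(project["id"], project["name"])
```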