Retrieve/use G Suite default routing rules programmatically - google-admin-sdk

I am only looking for read-only access.
I'd like to develop either a small web app, or maybe a script embedded in Google Sheets, that lets my users look up which Google Admin default routing rules they are involved in.
To do that, I'll need an API to go through the rules and tabulate the information in the way I need it.
Can I do that with the Admin SDK, which is soon to be deprecated? Is there a replacement product that can do what I want?
More details:
I currently use default routing for a few purposes. I have about 15 rules, and each one changes the route of a simple Match Rule by adding extra recipients. Some of these are to catch emails sent to ex-employees.
Others are to handle certain general email addresses like sales@example.com. Rather than using a sales group, we have a sales user account. And rather than putting forwarding rules in that user's settings, we use default routing.

I had a similar problem where I needed the routing rules... a somewhat different case, since I just wanted one-time access to see what was going on, not necessarily something for users. I could not find anything else that helped me even retrieve the rules (other than opening each one up individually). I ended up finding that I could just scrape the HTML of the routing rules page to a CSV and filter for lines with an '@' character. The rules have a bunch of t/f values that can presumably be matched back to their function; I didn't need all that and didn't spend the time to figure it out. This probably doesn't help the original poster's case, but perhaps my finding can help the next person looking for a way to do this.
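If it helps, here is a rough sketch of that scraping step in Python (the file name and the crude tag stripping are my assumptions; the Admin console page is not a documented format):

import csv
import re

# Read the routing-rules page saved from the Admin console.
with open("routing_rules.html", encoding="utf-8") as f:
    lines = f.read().splitlines()

with open("routing_rules.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    for line in lines:
        if "@" in line:  # keep only lines that mention an email address
            # Strip HTML tags crudely; good enough for eyeballing the rules.
            writer.writerow([re.sub(r"<[^>]+>", " ", line).strip()])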

Related

Blocking based on full URL and not just the URI in AWS WAF

I am using AWS WAF across multiple CloudFront distributions which go to different URLs. Generally speaking, it is working well. However, we have noticed particular activity on a few of the underlying sites that I want to block, but I don't want to block it across all the sites.
It seemed simple enough to me to create a WAF rule that would match a regex on the URI and block based on that. However, it appears that AWS WAF does not use the host in its URI matching. For example this rule:
Inspect the URI and block based on a regex, with the regex being:
^(http|https):\/\/(www)?\.?example\.net\/(.*)?\/*.html$
And these test URLs work in my regex tester:
http://example.net/blah.html
https://example.net/blah.html
http://www.example.net/blah.html
https://example.net/stuff/blah.html
When I apply it to the WAF, though, it does not block.
Is there something else I can do here to achieve what I am looking for? I do not want to edit anything directly on my hosting servers, because that would be more of a maintenance headache and it would not solve the problem I am attempting to solve (which is to stop bots from spamming bad URLs and spiking my server with 404s).
I also realize someone may suggest a rate limit - which I do have in place - but the bots are coming from many different IPs, so that doesn't solve this particular case. Instead, I just want to block some of the URL types they keep trying to reach; in this case, it's thousands and thousands of HTML pages. A rate limit also does not take into account that I only want to block these requests for one very specific site.
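To illustrate: my regex can never match the path-only string that WAF appears to inspect. A quick check (sketched with Python's re module, which is just my test harness, nothing WAF-specific):

import re

pattern = re.compile(r"^(http|https):\/\/(www)?\.?example\.net\/(.*)?\/*.html$")

print(bool(pattern.match("http://example.net/blah.html")))  # True - full URL, as in my tester
print(bool(pattern.match("/blah.html")))                    # False - the path-only value WAF matches against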

Google Analytics Referral Exclude Regex Partial Domain Name

I am attempting to filter out some of the nasty Analytics referral traffic. This ghost traffic never actually touches my site, so htaccess is out.
I have to specifically go into Google to create a filter. I have a few set up already, but I am looking to try something new that will hopefully make my exclusion list a bit easier to manage.
I want to block any referral traffic coming from a domain that has seo, traffic, monitize, etc. in it. This would stop about 90% of the referral traffic and would keep excluding new sites as they appear.
What I currently use is this:
(seomonitizer|trafficseo|seotraffic|trafficmonitizer)\.(com|org|net|рф|eu|co)
It removes each site one by one, but when a new site hits, I have to add it to the list.
I'm not sure what the regex capabilities and limitations of the Analytics filters are, but possibly this may be the foundation; I'm just not sure what goes in the middle:
((?=())\.(?=()))
Thanks
Unfortunately, you will have to check and add each one of them to your list as they appear in your account. To answer your question, here is an example of what I use:
.*((darodar|priceg|buttons\-for(\-your)?\-website|makemoneyonline|blackhatworth|hulfingtonpost|o\-o\-6\-o\-o|(social|(simple|free|floating)\-share)\-buttons)\.com|econom\.co|ilovevitaly(\.co(m)?)|(ilovevitaly(\.ru))|(humanorightswatch|guardlink)\.org).*
I like to use .co(m)? instead of .com, for example, so a single entry covers both the .co and .com variants of a domain.
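To sanity-check the pattern outside of Analytics, here is a quick sketch (Python's re module is just my test harness; Analytics' own regex flavor may differ slightly):

import re

spam_filter = re.compile(
    r".*((darodar|priceg|buttons\-for(\-your)?\-website|makemoneyonline|"
    r"blackhatworth|hulfingtonpost|o\-o\-6\-o\-o|"
    r"(social|(simple|free|floating)\-share)\-buttons)\.com|econom\.co|"
    r"ilovevitaly(\.co(m)?)|(ilovevitaly(\.ru))|"
    r"(humanorightswatch|guardlink)\.org).*"
)

# Known spam referrers match; a legitimate domain does not.
for referrer in ["ilovevitaly.co", "ilovevitaly.com", "darodar.com", "example.com"]:
    print(referrer, bool(spam_filter.match(referrer)))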
Remember: to avoid ghost referrals there are currently three methods.
1) The first (the one you are using) is to create a filter that blacklists all the bad traffic, but there is a limit on the number of characters you can use, so you might end up creating multiple similar filters to cover all the nasty analytics referral traffic. Here is a link with a complete list of bad bots.
2) The second is to check the box "Exclude all hits from known bots and spiders" in your Google Analytics Account > Property > View.
3) Create a hostname filter following the steps in this article.

Is it feasible to have a single sign-on page for multiple datasources?

I am in the beginning stages of planning a web application using ColdFusion and SQL Server 2012.
In researching the pros and cons of using multiple databases (one per customer) versus one large database, I have decided that, for my purposes, multiple databases would be the best approach.
With this in mind, I am now wondering about the best way to proceed with logging clients in. I have two thoughts here:
I could use sub-domains, with each one being for a specific client and the sub-domain also serving as the datasource name.
I could have a single sign-on page, with the datasource for each client stored in a universal users table.
I like the idea of option 2 best, but I am wondering how this may work in the real world. Requiring each username to be unique across all clients would not be ideal (although I suppose I could key this off an email address instead of a username).
I was thinking of maybe adding something along the lines of a "company code" that would need to be entered along with the username and password.
I feel like this may be asking too much of clients, though.
With all of this said, would you advise going with option 1 or option 2? I would also love to hear any thoughts or ideas that may differ.
Thanks!
If you are expecting to have a large amount of data per client, it may be a good idea to split each client into their own database.
You can create a global database that contains the client information, client datasource, settings, etc. for each client, and then set the client database in Application.cfc.
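The lookup itself is simple; here is a sketch of the pattern in Python rather than ColdFusion (the table and column names are assumptions):

import sqlite3  # stand-in for SQL Server, purely for illustration

def datasource_for_user(email):
    # The global database holds one row per user: credentials plus the
    # name of the client-specific database (datasource) their data lives in.
    conn = sqlite3.connect("global.db")
    row = conn.execute(
        "SELECT datasource FROM users WHERE email = ?", (email,)
    ).fetchone()
    conn.close()
    return row[0] if row else None

# At login, resolve the datasource once, then point all of that
# session's queries at the client-specific database.
print(datasource_for_user("jane@client-a.example"))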
This also makes it easier down the road if a client requests their data or you would like to remove a client from the system.

How to check spamminess of a link/URL

I know that most spam involves one or more links, so I am wondering if there is any web service which can check the spam weight/spamminess of a URL, similar to how Akismet can check the spamminess of text content.
P.S. I searched on Google and couldn't find anything satisfactory :)
There are a number of different URI DNS-based blackhole list (DNSBL) services available to the public for low-volume lookups. Two of the most well-known are SURBL and URIBL. PhishTank (run by OpenDNS) is also worth a look, as many of its URLs are categorized and classified as well as listed.
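For a quick feel of how these work, here is a minimal sketch of a URI DNSBL lookup against SURBL's public multi.surbl.org zone (mind each service's volume limits and terms of use):

import socket

def is_listed(domain, zone="multi.surbl.org"):
    # DNSBLs are queried over plain DNS: prepend the domain to the zone
    # and do an A-record lookup; any answer means the domain is listed.
    try:
        socket.gethostbyname(f"{domain}.{zone}")
        return True
    except socket.gaierror:
        return False  # NXDOMAIN: not listed

print(is_listed("example.com"))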

REST and web services - having trouble understanding them

Well, the title more or less says it all. I sort of understand what REST is - the use of existing HTTP methods (POST, GET, etc.) to facilitate the creation/use of web services. I'm more confused about what defines a web service and how REST is actually used to make/expose a service.
For example, Twitter, from what I've read, is RESTful. What does that actually mean? How are the HTTP methods invoked? When I write a tweet, how is REST involved, and how is it any different from simply using a server-side language and storing that text data in a database or file?
This concept was also a bit vague to me, but after looking at your question I decided to clarify it a bit more for myself.
Please refer to this link on MSDN, and this one.
Basically, it is about using HTTP methods (GET/POST/DELETE) to define which resources the application exposes and which operations are allowed on them.
For example:
Say you have the URL:
http://Mysite.com/Videos/21
where 21 is the id of a video.
We can then define which methods are allowed on this URL: GET for retrieving the resource, POST for updating/creating, DELETE for deleting.
Generally, it seems like an organized and clean way to expose your application's resources, and the operations supported on them, through HTTP methods.
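To make that concrete, here is a sketch of a client invoking those methods with Python's requests library (the URL and payload are hypothetical):

import requests

url = "http://Mysite.com/Videos/21"

requests.get(url)                          # retrieve video 21
requests.post(url, json={"title": "New"})  # update/create, per the convention above
requests.delete(url)                       # delete video 21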
You may want to start with this excellent introductory write-up. It covers it all, from start to end.
A RESTful architecture provides a unique URL that identifies each resource individually and conveys, through HTTP verbs, the appropriate action to perform on that resource.
The best way I can think to explain it is to treat each data model in your application as a unique resource; REST helps route the requests without using a query string at the end of the URL, e.g., instead of /posts?q=1, you'd just use /posts/1.
It's easier to understand through example. The following is the REST architecture enforced by Rails, but it gives you a good idea of the convention.
GET /tweets/1 ⇒ means that you want to get the tweet with the id of 1
GET /tweets/1/edit ⇒ means you want to go to the edit action associated with the tweet with an id of 1
PUT /tweets/1 ⇒ PUT says to update this tweet, not fetch it
POST /tweets ⇒ POST says: here's a new one, add it to the database; I can't give an id because there isn't one until it is saved
DELETE /tweets/1 ⇒ delete it from the database
Resources are often nested, though, so on Twitter it might look like:
GET /users/jedschneider/tweets/1 ⇒ users have many tweets; get the tweet with id 1 belonging to the user jedschneider
The architecture for implementing REST will be unique to the application, though some frameworks support it by default (like Rails).
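To sketch the same routing table outside of Rails, here is a minimal Python/Flask version (Flask is my choice for illustration; the handler names and in-memory storage are assumptions):

from flask import Flask, request

app = Flask(__name__)
tweets = {}   # in-memory stand-in for the database
next_id = 1

@app.get("/tweets/<int:tweet_id>")
def show(tweet_id):
    if tweet_id not in tweets:
        return {"error": "not found"}, 404
    return tweets[tweet_id]

@app.post("/tweets")
def create():
    global next_id
    tweets[next_id] = request.get_json()  # the id exists only after saving
    next_id += 1
    return {"id": next_id - 1}, 201

@app.put("/tweets/<int:tweet_id>")
def update(tweet_id):
    tweets[tweet_id] = request.get_json()
    return tweets[tweet_id]

@app.delete("/tweets/<int:tweet_id>")
def destroy(tweet_id):
    tweets.pop(tweet_id, None)
    return "", 204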
You're struggling because there are two relatively different understandings of the term "REST". I've attempted to answer this earlier, but suffice to say: Twitter's API isn't RESTful in the strict sense, and neither is Facebook's.
sTodorov's answer shows the common misunderstanding that REST is about using all four HTTP verbs and assigning different URIs to resources (usually with documentation of what all the URIs are). So when Twitter invokes REST, they're merely doing just this, along with most other "RESTful" APIs.
But this so-called REST is no different from RPC, except that RPC (with IDLs or WSDLs) might introduce code-generation facilities, at the cost of higher coupling.
REST is actually not RPC. It's an architecture for hypermedia-based distributed systems, which might not fit the bill for everyone making an API. In the linked MSDN article, the hypermedia kicks in when they talk about <Bookmarks>http://contoso.com/bookmarkservice/skonnard</Bookmarks>; the section ends with this sentence:
These representations make it possible to navigate between different types of resources
which is the core principle that most RESTful APIs violate. The article doesn't state how to document a RESTful API; if it did, it would be a lot clearer that clients have to navigate links in order to do things (RESTful) rather than being handed a lot of URI templates (RPCish).
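To illustrate the difference, a hypermedia client starts at a single entry point and discovers every other URI from links in the representations it receives. A sketch (the entry URL is from the article; the JSON link structure is entirely my assumption):

import requests

# Fetch the entry-point resource; everything else is discovered from it.
entry = requests.get("http://contoso.com/bookmarkservice/skonnard").json()

# Navigate by following a link found in the representation itself,
# instead of constructing the URL from a documented template.
bookmarks = requests.get(entry["links"]["bookmarks"]).json()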