I can create a group on the Amazon WorkMail console and add internal users to the group. However, I do not see an option to add external email addresses with different domains.
Any tips on how to do this? Should I just create an email forwarding rule on an internal user?
Unfortunately, at present, this isn't possible (more info below), so your proposed approach of creating an email forwarding rule is likely the best option.
One thing to note is that you may want to set up email redirect rather than email forwarding. If you redirect, the end recipient will see the email as coming from the original sender and addressed to the original recipient, as opposed to being forwarded. In WorkMail, you'll find redirect as one of the options, alongside forwarding, when setting up the rule(s).
One additional tip for setting up the redirect/forwarding rule: there's no condition in the rules setup to simply forward everything. Because of that, you'll likely want to create two rules, the first using the condition has my name in the To box and the second using the condition Does not have my name in the To box. Together, these cover redirecting/forwarding everything. Fortunately, you can redirect to multiple destinations, so you can probably get away with just these two rules even if you have multiple final destinations.
Additional info about not being able to add external addresses to groups:
An AWS team member has stated that this isn't currently supported in this AWS forum post from 2017:
Indeed, it is not possible to add an external email address to a group. I will forward this feature request to the service team.
A possible workaround is to create a redirect rule that redirects emails sent to this group to the external users.
As of today, you still get the following message when adding members to a group (console screenshot):
You can only add users and groups that are enabled for access to Amazon WorkMail.
[Edit, 2021-01-17: Tips on bulk addition of email addresses to a rule]
Bulk addition of email addresses to a rule
Unfortunately, there don't currently appear to be any APIs for creating inbox rules programmatically. However, you can copy-paste a large number of email addresses into a rule.
First, you'll want to export the set of email addresses you have into e.g. a CSV file.
Then, you'll want to append a ; to each email address, because that character is recognized as a separator; without it, the email rule would interpret the entire pasted text as a single email address. In Google Sheets, for example, this can be done with CONCATENATE, e.g. =concatenate(A1,";").
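If you'd rather skip the spreadsheet step, the same transformation is a few lines of Python. A minimal sketch; the file names are hypothetical, and it assumes the exported file has one address per line:

    # Append ";" to each address so the rule editor treats the pasted
    # text as a list of addresses rather than one long address.
    # Assumes addresses.csv contains one email address per line.
    with open("addresses.csv") as src, open("addresses_out.txt", "w") as dst:
        for line in src:
            address = line.strip()
            if address:  # skip blank lines
                dst.write(address + ";\n")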
Next, open the email rule, copy the email addresses from the spreadsheet, paste them into the recipients box, and click the To button. In this case it's 100 addresses, so it takes a little while to load.
Once it finishes loading, you'll see checkboxes next to the email addresses, like when you add single email addresses manually.
Make sure to click Ok on the recipients dialog, then click Ok on the rule dialog, and finally click Save changes on the Email Rules Settings panel to ensure everything gets saved.
Related
Is there an API that lets an application send invitations and requests to join a group?
I have checked the Google Directory API at https://developers.google.com/admin-sdk/directory/v1/reference/, but all I can find is the members API that lets an application directly add members.
What I am looking for is:
- to send a request to join a group,
- to list, accept or reject such requests,
- to send an invitation to join a group,
- to list, accept or reject such invitation.
I had no luck checking the reference, and a Google search and a search on Stack Overflow also turned up nothing. Does anyone know if such an API even exists, and if so, where I can find the documentation?
Currently there seems to be no ad-hoc API method for that. The currently supported group operations can be found in Directory API: Group Members, namely: add a member, update a group membership, retrieve a group member, retrieve all group members, and delete a member. You'd have to implement the other functionality you mentioned yourself.
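For reference, direct addition via members.insert is the closest available primitive. Here's a minimal sketch using google-api-python-client; the credentials file, admin address, group, and member addresses are hypothetical, and it assumes a service account with domain-wide delegation:

    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/admin.directory.group.member"]

    # Hypothetical file/addresses; the service account impersonates
    # an admin user via domain-wide delegation.
    creds = service_account.Credentials.from_service_account_file(
        "service-account.json", scopes=SCOPES).with_subject("admin@example.com")

    service = build("admin", "directory_v1", credentials=creds)

    # Directly add a member -- the only supported "join" primitive.
    # Invitations/requests would have to be modeled in your own app.
    service.members().insert(
        groupKey="my-group@example.com",
        body={"email": "new.member@example.com", "role": "MEMBER"},
    ).execute()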
I know there have been similar questions in the past but I have tried many solutions given online to no avail. I am just not able to hide internal traffic for Google Analytics on my Django site.
I am setting the filter from Admin -> View -> Filters. I have tried both Predefined and Custom, with a fixed IP as well as a regex pattern. (Yes, I have double-checked my IP on whatismyip.com and I am using the right one.)
I read somewhere that it takes time for the filters to come into effect, so even waited for 24 hours but I still see a lot of internal traffic.
Google Tag Assistant is also tracking the pages when I access them from the internal IP (not sure if it's supposed to know about the filters).
Not sure where I could be going wrong.
(I am using a reverse proxy, but hopefully that shouldn't change anything, since the Google Analytics code runs on the client side.)
Do not use any filter on the default view (called 'All Website Data'). Create a separate view and then create a filter on it. That will work.
(After struggling with it for a few days, this response helped me with the above fix)
I struggled with this as well, so here is what I found out.
Note that real-time reporting can take up to two hours to catch up to and reflect analytics configuration changes such as the addition of filters.
Possible solutions
1) As suggested in the other answer, leave the default view as default and create an additional view for the filters:
The default view collects all traffic. You need to create a new view to which you can apply your filter. Check out item 3 here: https://support.google.com/analytics/answer/1009618?hl=en
How to add a new view: https://support.google.com/analytics/answer/1009714?hl=en
2) Filter IP v6, not v4:
Exclude the IPv6 address as mentioned in the above post. This is the one that "what is my IP address" returns; it's not the IPv4 syntax (xxx.xxx.xxx.xxx). I have noticed that wired machines that stay connected seem to keep the same IPv6 address (the 31-digit sequence), whereas wireless devices (mobile phones, tablets) tend to be dynamic. However, as posted above, if you use just the first 15 digits of the sequence with the "begins with" filter type, it will block the devices using the same shared router (i.e. the internet router in your home).
About filtering only the first 15 digits: I think this is meant to filter the first four blocks, so if your IPv6 address looks like 2601:191:c001:2f9:5c5a:1c20:61b6:675a, then filter IPs that begin with 2601:191:c001:2f9:.
Information found here.
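If you want to double-check which prefix to filter on, Python's standard ipaddress module can compute the /64 network prefix for you. A small sketch, using the example address from above:

    import ipaddress

    # Derive the GA "begins with" prefix (first four blocks, i.e. the /64
    # network that home routers typically keep stable) from a full address.
    raw = "2601:191:c001:2f9:5c5a:1c20:61b6:675a"
    net = ipaddress.ip_network(raw + "/64", strict=False)
    prefix = str(net.network_address).rstrip(":") + ":"
    print(prefix)  # -> 2601:191:c001:2f9: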
What I want to achieve is pretty simple: if you send a request to some address, the response you get is a single integer, like 13. I think it is equivalent to hosting an .html page with a single number on it, which I can then parse as a string in my application. (It is a Unity game, using the WWW class to send the request.)
(This is actually a version number. If it is greater than the one stored in my app, I would update it and then send another request elsewhere to retrieve something bigger.)
I am looking for the cheapest way to handle this. I planned to use AWS but am confused about which component to use: S3? EC2? Lambda? CloudFront?
If you think doing this on web hosting, Heroku, or something else is better, I'd like to hear about that too.
To serve up a simple value, S3 should do the trick.
Create a bucket in the console, using only lowercase letters, digits, and dashes in the name. The name has to be globally unique across all of S3, so make up something unique. We'll call the bucket example-bucket.
Create your file on your computer with the desired contents. If plain text, call it version.txt.
In the AWS console, select the bucket, and upload the file. While clicking through the "next" screens, put a check next to "make everything public" and accept the defaults. Upload the file.
Now, go to https://example-bucket.s3.amazonaws.com/version.txt in your browser (using your actual bucket name) and verify that the file loads. That's your download link.
Done. As long as you don't expect to handle over about 800 requests per second, this will do exactly what you want.
Review the S3 pricing, of course.
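If you'd rather script the setup and the check than click through the console, here's a minimal sketch using boto3 and the standard library. The bucket and key names are hypothetical, and it assumes the bucket permits public reads as set up above; the game itself would do the equivalent fetch with Unity's WWW class:

    import urllib.request

    import boto3

    BUCKET = "example-bucket"  # hypothetical; use your bucket name
    KEY = "version.txt"

    # Publish the version number. ACL="public-read" only works if the
    # bucket allows public ACLs (the "make everything public" setting).
    s3 = boto3.client("s3")
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"13",
                  ACL="public-read", ContentType="text/plain")

    # Client side: fetch the number and compare with the stored version.
    url = "https://%s.s3.amazonaws.com/%s" % (BUCKET, KEY)
    remote_version = int(urllib.request.urlopen(url).read().strip())
    print(remote_version)  # -> 13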
Although this question would be a better fit for Server Fault: an EC2 instance running an nginx or Apache web server would be sufficient. Put a load balancer in front of the EC2 instances.
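For scale, the web server in question can be tiny. Here's a sketch of the whole "server" using nothing but Python's standard library; the port and response value are placeholders (binding port 80 requires root, so this uses 8080):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    VERSION = b"13"  # the single integer to serve

    class VersionHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Every GET returns the version number as plain text.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(VERSION)))
            self.end_headers()
            self.wfile.write(VERSION)

    HTTPServer(("0.0.0.0", 8080), VersionHandler).serve_forever()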
I'm new to vSphere and I have an important question.
Is there a section where I can set parameters so that vSphere sends me notifications/emails when a condition occurs?
For example, when a virtual machine's CPU usage goes over a value I have set as an 'alarm value', or when a virtual machine's disk space usage goes over a value, vSphere should send an email/notification to inform me.
I tried navigating the menus but couldn't find anything like this.
Can I use an external app?
I haven't done it myself, but according to the documentation:
In the Actions tab of the Alarm Settings dialog box, click Add to add an action.
In the Actions column, select Send a notification email from the drop-down menu.
In the Configuration column, enter recipient addresses. Use commas to separate multiple addresses.
If you're not familiar with alarms at all you may want to take a look at their Alarm Example.
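If you'd rather create the alarm programmatically than through the UI, the pyVmomi library exposes the same alarm objects. The sketch below is untested; the host, credentials, thresholds, and recipient are placeholders, and vCenter's Mail settings (SMTP server and sender) must be configured before email actions will fire:

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    # Placeholder credentials; newer pyVmomi accepts
    # disableSslCertValidation, older versions need sslContext instead.
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      disableSslCertValidation=True)
    content = si.RetrieveContent()

    # Look up the counter id for cpu.usage.average (ids vary per vCenter).
    counter_id = next(c.key for c in content.perfManager.perfCounter
                      if c.groupInfo.key == "cpu"
                      and c.nameInfo.key == "usage"
                      and c.rollupType == "average")

    # Trigger on VM CPU usage; thresholds are in hundredths of a percent.
    expr = vim.alarm.MetricAlarmExpression(
        operator=vim.alarm.MetricAlarmExpression.MetricOperator.isAbove,
        type=vim.VirtualMachine,
        metric=vim.PerformanceManager.MetricId(counterId=counter_id,
                                               instance=""),
        yellow=7500, red=9000)

    # Email on the yellow -> red transition.
    email = vim.action.SendEmailAction(
        toList="ops@example.com", ccList="",
        subject="VM CPU alarm",
        body="CPU usage crossed the configured threshold.")
    trigger = vim.alarm.AlarmTriggeringAction(
        action=email,
        transitionSpecs=[vim.alarm.AlarmTriggeringAction.TransitionSpec(
            startState=vim.ManagedEntity.Status.yellow,
            finalState=vim.ManagedEntity.Status.red,
            repeats=False)])

    spec = vim.alarm.AlarmSpec(
        name="VM CPU above 90%",
        description="Email when a VM's CPU usage goes above 90%",
        enabled=True,
        expression=vim.alarm.OrAlarmExpression(expression=[expr]),
        action=vim.alarm.GroupAlarmAction(action=[trigger]))

    # Defining it on the root folder applies it to every VM in the inventory.
    content.alarmManager.CreateAlarm(entity=content.rootFolder, spec=spec)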
I am working with a team that is using S3 to host content. They moved from a single bucket for all brands to one bucket for each brand, and now we are having trouble linking to the content from a Salesforce site.com page. When I copy the link from S3 as HTTPS, I get: "Your connection is not private. Attackers might be trying to steal your information from spiritxpress.s3.varsity.s3.amazonaws.com (for example, passwords, messages, or credit cards)."
I have asked them to compare the settings from the one that is working, and I don't have access to dig into it myself, and we are pretty new to this as well so thought I would see if there were any known paths to walk down. The ID and Key have not changed and I can access the content via CyberDuck, it just is not loading when reached via a link.
Let me know if additional information is needed and I will provide as quickly as I can.
[EDIT] The bucket naming convention they are using is all lowercase and meets the naming guidelines, but the way it is structured seems strange to me: they have named the bucket "brandname.s3.companyname", and when copying the link it comes across as "https://brandname.s3.companyname.s3.amazonaws.com/directory/filename", whereas the other bucket was rendered as "https://s3.amazonaws.com/bucketname/..."
Whoever made this change has failed to account for the way wildcard certificates work in HTTPS.
Requests to S3 over HTTPS are greeted with a certificate identifying itself as "*.s3[-region].amazonaws.com". For the browser to consider the certificate valid for the link you're hitting, there cannot be any dots in the part of the hostname that matches the * offered by the cert. Bucket names with dots are valid, but they cannot be used on the left side of "s3[-region].amazonaws.com" in the hostname unless you are willing and able to accept a certificate that is deemed invalid... they can only be used as the first element of the path.
The only way to make dotted bucket names and S3 native wildcard SSL to work together is the other format: https://s3[-region].amazonaws.com/example.dotted.bucket.name/....
If your bucket isn't in us-standard, you likely need to use the region in the hostname, so that the request goes to the correct endpoint, e.g. https://s3-us-west-2.amazonaws.com/example.dotted.bucket.name/path... for a bucket in us-west-2 (Oregon). Otherwise S3 may return an error telling you that you need to use a different endpoint (and the endpoint they provide in the error message will be valid, but probably not the one you're wanting for SSL).
This is a limitation of how SSL certificates work, not a limitation in S3.
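If you generate links programmatically, you can force the path-style form from code. A small sketch with boto3; the bucket, region, and key names are hypothetical:

    import boto3
    from botocore.client import Config

    # Path-style addressing keeps the dotted bucket name in the URL path,
    # so the hostname still matches Amazon's *.s3[-region].amazonaws.com cert.
    s3 = boto3.client("s3",
                      region_name="us-west-2",
                      config=Config(s3={"addressing_style": "path"}))

    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "example.dotted.bucket.name", "Key": "path/to/file"})
    print(url)  # https://s3.us-west-2.amazonaws.com/example.dotted.bucket.name/...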
Okay, it appears it did boil down to some permissions that were missed, and we were able to get the file to display as expected. Other issues remain, but this one is resolved, so I'm marking the question as answered.