I have set up my website with Amazon EC2 and Route 53 successfully. Our company decided to go with Microsoft Exchange for their email.
Microsoft Online told me to add a TXT and an MX record in the DNS registry. Works great! I can now send emails. However, when people email me from a Google account (that's the one I know of; there could be other services too), they receive an error like this:
This is an automatically generated Delivery Status Notification
THIS IS A WARNING MESSAGE ONLY.
YOU DO NOT NEED TO RESEND YOUR MESSAGE.
Delivery to the following recipient has been delayed:
[[EMAIL_ADDRESS]]
Message will be retried for 1 more day(s)
Technical details of temporary failure:
DNS Error: Domain name not found
Do you know how I can fix this? My web address is visible and working, and some people can send me emails fine, but others get this error.
UPDATE: DNS RECORDS
+----------------------+-----+---------------------------------------------------------------------------------------------+--------+
| charterbox.com.au. | A | 54.252.90.224 | 3600 |
| charterbox.com.au. | MX | 0 ms36130019.msv1.invalid.outlook.com | 3600 |
| charterbox.com.au. | NS | ns-1099.awsdns-09.org. ns-138.awsdns-17.com. ns-830.awsdns-39.net. ns-1960.awsdns-53.co.uk. | 172800 |
| charterbox.com.au. | SOA | ns-1099.awsdns-09.org. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400 | 900 |
| charterbox.com.au. | TXT | "MS=ms36130019" | 3600 |
| *.charterbox.com.au. | A | 54.252.90.224 | 300 |
+----------------------+-----+---------------------------------------------------------------------------------------------+--------+
The MX record you have in place looks like the record used to verify your domain. You will need to change it to the production MX records. You can find the specific value(s) for your domain in the domain administration section of the admin page at portal.microsoftonline.com. Once you update the MX records, you should be good to go.
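A quick sketch of how you could spot this: the verification record contains the telltale `msv1.invalid.outlook.com` host. Below, the MX value is hard-coded from the DNS table in the question; in practice you would feed in the output of `dig +short MX charterbox.com.au`.

```shell
# Flag an MX value that still points at the verification-style record
# instead of a production Exchange Online endpoint.
# (Value copied from the question's DNS table.)
mx="0 ms36130019.msv1.invalid.outlook.com"
case "$mx" in
  *.invalid.outlook.com*) echo "still the verification record - replace with the production MX" ;;
  *)                      echo "looks like a production MX" ;;
esac
```

Any MX ending in `.invalid.outlook.com` will never route mail, which is exactly why senders see "Domain name not found".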
Related
Could you check why user kcizek is not able to log in to either hub.tess.io or ecr? This is a first-time login, but it should work with corp and PIN + Yubi. Login works just fine for me.
I am unable to access https://ecr.vip.ebayc3.com/repository/
When I log in, I am faced with this. The ‘contact us’ link doesn’t have any contact information, so I’m trying here. Any ideas? Thanks.
Potentially relevant background: this is my first time attempting to get access.
It turns out the email field is absent from the user account:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| email | |
| enabled | True |
| id | e69fe5b9d9384b338b3c397c7c84e33f |
| name | kcizek |
+-----------+----------------------------------+
The solution is to contact the Tess oncall to add the email info.
Related Command
openstack user set kcizek --email kcizek@ebay.com
I use a CAN-discovery tool which gives me output like the one below:
-------------------
CARING CARIBOU v0.3
-------------------
Loaded module 'uds'
Sending Diagnostic Session Control to 0x0710
Verifying potential response from 0x0710
Resending 0x710... Success
Found diagnostics server listening at 0x0710, response at 0x077a
Sending Diagnostic Session Control to 0x07df
Verifying potential response from 0x07df
Resending 0x7df... Success
Found diagnostics server listening at 0x07df, response at 0x077a
Sending Diagnostic Session Control to 0x07e0
Verifying potential response from 0x07e0
Resending 0x7e0... Success
Found diagnostics server listening at 0x07e0, response at 0x077a
Sending Diagnostic Session Control to 0x07ff
Identified diagnostics:
+------------+------------+
| CLIENT ID | SERVER ID |
+------------+------------+
| 0x00000710 | 0x0000077a |
| 0x000007df | 0x0000077a |
| 0x000007e0 | 0x0000077a |
+------------+------------+
Now I want to process the results in my JavaScript application and save them to my database as client and server IDs. To do this I need to parse the output above to extract just the IDs, labelled as client or server.
The first step would be to get the hex IDs from the output and determine whether each one is a client or a server ID.
The problem is I don't know grep/awk/sed well enough to come up with a bash command that solves my problem.
It would be great if somebody could help me out a little :)
I tried all sorts of grep commands like
grep 0x000 disc_log_temp.txt
, also with different options like -w, -x, -o, etc.
With these commands I get output like this:
Sending Diagnostic Session Control to 0x0710
| 0x00000710 | 0x0000077a |
| 0x000007df | 0x0000077a |
| 0x000007e0 | 0x0000077a |
But I need just the individual ID strings, and I also don't understand why my grep displays the first line...
If somebody could tell me how to get just the IDs, and also how to tell which ones are client IDs and which are server IDs, that would be really great.
Thank you very much in advance.
If you use awk: on the lines you want, the first field is "|" and the number of fields is 5. I'm not sure how you want it formatted, but it should be easy to modify.
awk '$1=="|"&&NF==5{print "client: " $2 ", server: " $4}' disc_log_temp.txt
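As a demonstration (recreating the table lines from the question in a sample file), the header row has 7 fields so the NF==5 condition skips it, and only the ID rows are printed:

```shell
# Recreate the table portion of the tool's output.
cat > disc_log_temp.txt <<'EOF'
+------------+------------+
| CLIENT ID  | SERVER ID  |
+------------+------------+
| 0x00000710 | 0x0000077a |
| 0x000007df | 0x0000077a |
| 0x000007e0 | 0x0000077a |
+------------+------------+
EOF

# Print one "client: ..., server: ..." line per ID row.
awk '$1=="|"&&NF==5{print "client: " $2 ", server: " $4}' disc_log_temp.txt
```

This prints three lines, e.g. `client: 0x00000710, server: 0x0000077a`, which is already easy to split on in JavaScript (or you could swap the print format for JSON).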
I am trying to upload a roughly 10 GB file from my local machine to S3 (inside a Camel route). The file gets uploaded in around 3-4 minutes, but the upload also throws the following exception:
2014-06-26 13:53:33,417 | INFO | ads.com/outbound | FetchRoute | 167 - com.ut.ias - 2.0.3 | Download complete to local. Pushing file to S3
2014-06-26 13:54:19,465 | INFO | manager-worker-6 | AmazonHttpClient | 144 - org.apache.servicemix.bundles.aws-java-sdk - 1.5.1.1 | Unable to execute HTTP request: The target server failed to respond
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:95)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:62)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)[141:org.apache.httpcomponents.httpcore:4.2.4]
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)[141:org.apache.httpcomponents.httpcore:4.2.4]
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)[141:org.apache.httpcomponents.httpcore:4.2.4]
.......
at java.util.concurrent.FutureTask.run(FutureTask.java:262)[:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_55]
at java.lang.Thread.run(Thread.java:744)[:1.7.0_55]
2014-06-26 13:55:08,991 | INFO | ads.com/outbound | FetchRoute | 167 - com.ut.ias - 2.0.3 | Upload complete.
Because of this, the Camel route doesn't stop, and it continuously throws InterruptedException:
2014-06-26 13:55:11,182 | INFO | ads.com/outbound | SftpOperations | 110 - org.apache.camel.camel-ftp - 2.12.1 | JSCH -> Disconnecting from cxportal.integralads.com port 22
2014-06-26 13:55:11,183 | INFO | lads.com session | SftpOperations | 110 - org.apache.camel.camel-ftp - 2.12.1 | JSCH -> Caught an exception, leaving main loop due to Socket closed
2014-06-26 13:55:11,183 | WARN | lads.com session | eventadmin | 139 - org.apache.felix.eventadmin - 1.3.2 | EventAdmin: Exception: java.lang.InterruptedException
java.lang.InterruptedException
at EDU.oswego.cs.dl.util.concurrent.LinkedQueue.offer(Unknown Source)[139:org.apache.felix.eventadmin:1.3.2]
at EDU.oswego.cs.dl.util.concurrent.PooledExecutor.execute(Unknown Source)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.tasks.DefaultThreadPool.executeTask(DefaultThreadPool.java:101)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.tasks.AsyncDeliverTasks.execute(AsyncDeliverTasks.java:105)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.handler.EventAdminImpl.postEvent(EventAdminImpl.java:100)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.adapter.LogEventAdapter$1.logged(LogEventAdapter.java:281)[139:org.apache.felix.eventadmin:1.3.2]
at org.ops4j.pax.logging.service.internal.LogReaderServiceImpl.fire(LogReaderServiceImpl.java:134)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.LogReaderServiceImpl.fireEvent(LogReaderServiceImpl.java:126)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl.handleEvents(PaxLoggingServiceImpl.java:180)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.PaxLoggerImpl.inform(PaxLoggerImpl.java:145)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.internal.TrackingLogger.inform(TrackingLogger.java:86)[18:org.ops4j.pax.logging.pax-logging-api:1.7.1]
at org.ops4j.pax.logging.slf4j.Slf4jLogger.info(Slf4jLogger.java:476)[18:org.ops4j.pax.logging.pax-logging-api:1.7.1]
at org.apache.camel.component.file.remote.SftpOperations$JSchLogger.log(SftpOperations.java:359)[110:org.apache.camel.camel-ftp:2.12.1]
at com.jcraft.jsch.Session.run(Session.java:1621)[109:org.apache.servicemix.bundles.jsch:0.1.49.1]
at java.lang.Thread.run(Thread.java:744)[:1.7.0_55]
Please see my code below and let me know where I am going wrong:
TransferManager tm = new TransferManager(S3Client.getS3Client());
// TransferManager processes all transfers asynchronously,
// so this call will return immediately.
Upload upload = tm.upload(Utils.getProperty(Constants.BUCKET),
        getS3Key(file.getName()), file);
try {
upload.waitForCompletion();
logger.info("Upload complete.");
} catch (AmazonClientException amazonClientException) {
logger.warn("Unable to upload file, upload was aborted.");
amazonClientException.printStackTrace();
}
The stack trace doesn't reference my code at all, so I couldn't determine where the issue is.
Any help or pointer would be really appreciated.
Thanks
I want to put a strict FAIL qualifier on all email sources that are not listed explicitly in the SPF record of my domain.
This can be accomplished with the following record (the -all designates that all other sources should not be accepted):
mydomain.com. IN TXT "v=spf1 ip4:my-ip-address/32 -all"
Now my problem is that I additionally want to white-list my email provider (mailgun.com) as well as Google Apps, so I created the following record:
mydomain.com. IN TXT "v=spf1 include:mailgun.org include:_spf.google.com ip4:my-ip-address/32 -all"
Now the SPF record of mailgun.org (the same situation applies for Google) resolves to:
mailgun.org. 3600 IN TXT "v=spf1 ip4:173.193.210.32/27 ip4:50.23.218.192/27 ip4:174.37.226.64/27 ip4:208.43.239.136/30 ip4:50.23.215.176/30 ip4:184.173.105.0/24 ip4:184.173.153.0/24 ip4:209.61.151.0/24 ip4:166.78.68.0/22 ip4:198.61.254.0/23 ip4:192.237.158.0/23 " "~all"
Now what's interesting is that they put a soft-fail qualifier ("~all") on their SPF record.
Wikipedia describes the include directive as follows:
If the included (a misnomer) policy passes the test this mechanism
matches. This is typically used to include policies of more than one
ISP.
I interpret this to mean that an unknown sender is qualified as SOFT FAIL by the included records, and therefore passes as SOFT FAIL because those records are included in the root record, even if the root record puts a FAIL on all non-included sources.
The included records would then effectively render the FAIL qualifier of the root record useless, so the least strict record would define the overall qualifier for unknown sources.
Am I correct in this assumption? If not, how is in the example given, an unknown sender qualified?
The behaviour is described in section 5.2 of the RFC, where it says:
Whether this mechanism matches, does not match, or throws an
exception depends on the result of the recursive evaluation of
check_host():
+---------------------------------+---------------------------------+
| A recursive check_host() result | Causes the "include" mechanism |
| of: | to: |
+---------------------------------+---------------------------------+
| Pass | match |
| | |
| Fail | not match |
| | |
| SoftFail | not match |
| | |
| Neutral | not match |
| | |
| TempError | throw TempError |
| | |
| PermError | throw PermError |
| | |
| None | throw PermError |
+---------------------------------+---------------------------------+
"Mechanism" in this context refers to the "include" functionality.
As shown in the table, a SoftFail causes a not-match.
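The table boils down to a simple mapping. As a sketch (the function name is just for illustration):

```shell
# Map a recursive check_host() result for an included record to its
# effect on the enclosing "include" mechanism, per the RFC's table.
include_effect() {
  case "$1" in
    Pass)                   echo "match" ;;
    Fail|SoftFail|Neutral)  echo "not match" ;;
    TempError)              echo "throw TempError" ;;
    PermError|None)         echo "throw PermError" ;;
    *)                      echo "unknown" ;;
  esac
}

include_effect SoftFail   # prints "not match"
```

So a SoftFail inside an include does not match, evaluation falls through to the root record's -all, and the overall result is FAIL.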
It also says:
In hindsight, the name "include" was poorly chosen. Only the
evaluated result of the referenced SPF record is used, rather than
acting as if the referenced SPF record was literally included in the
first.
I interpret this to mean that only the result of the included record is relevant, which in the case of a soft fail is a not-match (the same as if the record had a FAIL qualifier).
Here's also a test result with the pyspf library, performed on this website:
Input accepted, querying now...
Mail sent from this IP address: 1.2.3.4
Mail from (Sender): scknpbi#cacxjxv.com
Mail checked using this SPF policy: v=spf1 ip4:4.5.6.7/32 include:mailgun.org -all
Results - FAIL Message may be rejected
Mail sent from: 1.2.3.4
Mail Server HELO/EHLO identity: blanivzsrxvbla#saucjw.com
HELO/EHLO Results - none
Here is the documentation for the Activity data type.
However, I think I've seen 4 status codes for the responses:
'Successful'
'Cancelled'
'InProgress'
'PreInProgress'
Are there any others?
It looks like they have updated the documentation at the same URL you shared:
Valid Values: WaitingForSpotInstanceRequestId | WaitingForSpotInstanceId | WaitingForInstanceId | PreInService | InProgress | Successful | Failed | Cancelled