I have tried to create an admin user on my OTRS system (installed locally on localhost).
But I ran into a DNS problem when the email address was being validated.
How do I solve it?
To solve this issue I went to
/opt/otrs/Kernel/System
where you can find the file CheckItem.pm, and opened it:
sudo nano CheckItem.pm
There I modified the CheckEmail subroutine so that it always reports the address as valid:
sub CheckEmail {
    # Short-circuit the check: report every address as valid and skip the original validation below.
    return 1;
    ...
}
Apparently either your OTRS host cannot resolve domains at all, or the email address you are typing in has no valid MX record according to the DNS server your OTRS system uses.
You can change the DNS server by setting CheckMXRecord::Nameserver under Admin > SysConfig > Framework > Core to a valid nameserver.
Alternatively you can set CheckMXRecord to 'No' under Admin > SysConfig > Framework > Core if you do not want DNS lookup validation at all.
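If you want to see what such an MX lookup returns outside of OTRS, here is a rough, hedged sketch of the same kind of check in Python (it assumes the dnspython 2.x package is installed; example.com is just a placeholder):
import dns.resolver

def has_mx(domain):
    # Ask the resolver for MX records - the same kind of lookup the
    # CheckMXRecord option performs before accepting an email address.
    try:
        return len(dns.resolver.resolve(domain, "MX")) > 0
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        return False

print(has_mx("example.com"))
If this returns False for addresses you know are valid, the problem is the resolver your OTRS host uses rather than OTRS itself.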
I am setting up a custom MAIL FROM domain based on this guide: https://docs.aws.amazon.com/ses/latest/DeveloperGuide/mail-from.html
My primary domain is verified and I have added the MX record to the DNS settings, which I can see on mxtoolbox.com. However, the custom MAIL FROM domain is still in the "pending verification" state.
Does Amazon check it in batches (maybe once per hour), or should those changes be picked up immediately? Or is there some place where there could be a misconfiguration on my side, even though the MX record is visible? What can I do to successfully configure the custom MAIL FROM domain?
dig also confirms the MX record pointing to amazonses.
The SPF record allows only specifically designated IPs, without the -all option. Could that be the reason?
If anyone is struggling with the same problem, these two steps helped:
1) Remove your custom MAIL FROM domain from SES
2) Add it one more time
Those are the steps support gave us, and they worked.
A simple "turn it off and on again" and everything works :)
I have a weird problem with PGAdmin4.
My setup
pgAdmin 4.1 deployed on Kubernetes using the chorss/docker-pgadmin4 image, with one pod only to simplify troubleshooting;
Nginx ingress controller as reverse proxy on the cluster;
Classic ELB in front to load balance incoming traffic on the cluster.
ELB <=> NGINX <=> PGADMIN
From a DNS point of view, the hostname of pgadmin is a CNAME towards the ELB.
The problem
The application is reachable, users can log in, and everything works just fine. The problem is that after a couple of minutes (roughly 2-3) the session is invalidated and users are asked to log in again. This happens regardless of whether pgAdmin is actively being used or not.
After countless hours of troubleshooting, I found out that the problem happens when the DNS resolution of ELB's CNAME switches to another IP address.
In fact, I tried:
connecting to the pod directly via the k8s service's node port => session doesn't expire;
connecting to nginx (bypassing the ELB) directly => session doesn't expire;
mapping one of the ELB's IP addresses in my hosts file => session doesn't expire.
Given the above tests, I'd conclude that the Flask app (pgAdmin 4 is apparently a Python Flask application) considers my cookie invalid once the remote address behind my hostname changes.
Any Flask developer that can help me fix this problem? Any other idea about something I might be missing?
PGadmin 4 seems to use Flask-Security for authentication:
pgAdmin utilised the Flask-Security module to manage application security and users, and provides options for self-service password reset and password changes etc.
https://www.pgadmin.org/docs/pgadmin4/dev/code_overview.html
Flask-Security seems to use Flask-Login:
Many of these features are made possible by integrating various Flask extensions and libraries. They include:
Flask-Login
...
https://pythonhosted.org/Flask-Security/
Flask-Login seems to have a feature called "session protection":
When session protection is active, each request, it generates an identifier for the user’s computer (basically, a secure hash of the IP address and user agent). If the session does not have an associated identifier, the one generated will be stored. If it has an identifier, and it matches the one generated, then the request is OK.
https://flask-login.readthedocs.io/en/latest/#session-protection
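That description matches the symptom above. A purely illustrative sketch of the idea (not Flask-Login's actual code) looks like this: the fingerprint changes whenever the address the app sees changes, e.g. when requests start arriving via a different ELB node:
import hashlib

def session_fingerprint(remote_addr, user_agent):
    # Illustrative only: a stable hash of the client address and user agent.
    # If the address seen by the app changes, the fingerprint no longer matches
    # the one stored in the session, and the session is treated as invalid.
    base = ("%s|%s" % (remote_addr, user_agent)).encode("utf-8")
    return hashlib.sha512(base).hexdigest()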
I would assume that setting login_manager.session_protection = None would solve the issue, but unfortunately I don't know how to set it in pgAdmin. Hope this helps you somehow.
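For reference, this is a minimal sketch of where that knob lives in a plain Flask app using Flask-Login; pgAdmin wires up its own application object, so this only illustrates the setting itself:
from flask import Flask
from flask_login import LoginManager

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder secret for signing the session cookie

login_manager = LoginManager()
login_manager.init_app(app)

# "basic" or "strong" ties the session to a hash of IP address + user agent;
# None disables session protection entirely.
login_manager.session_protection = None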
For those looking for a solution: you need to add the line below to config.py, config_distro.py, or config_local.py.
config_local.py:
SESSION_PROTECTION = None
I faced a similar issue behind a GKE load balancer. A cleaner solution that worked for me is disabling the cookie protection that is based on the client IP address. Add the flag below to config_local.py:
# Disable cookie generation based on IP address
ENHANCED_COOKIE_PROTECTION = False
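Putting the two suggestions together, a hedged config_local.py might look like the sketch below; depending on your pgAdmin version only one of the two flags may exist or be needed, so try them one at a time:
# config_local.py -- local overrides picked up on top of pgAdmin's config.py

# Stop Flask-Login's session protection from invalidating sessions when the
# apparent client IP changes (e.g. behind an ELB/NGINX chain).
SESSION_PROTECTION = None

# Disable pgAdmin's own IP-based cookie binding, if your release exposes this option.
ENHANCED_COOKIE_PROTECTION = False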
I give up!
I don't know what to do anymore.
I have registered a domain at registro.br ("example.com.br") and pointed it to my Cloudflare account's nameservers.
On Cloudflare I've set up 2 CNAMEs:
"www" and "example.com.br", both pointing to my Heroku app's address.
And on my Heroku account I've set up the DNS for my domain example.com.br...
I'm using the apartment gem and locally it works perfectly. My project is based on the timetracker project.
But when I deploy it to Heroku it redirects to "com.br".
I've already added "example" as an excluded subdomain (excluded_subdomain.rb).
The Heroku log says that it can't find the 'public' or 'example' tenant and redirects to "com.br"... WTF?
Thanks for any help.
OK, this answer is for everybody whose domain does not end in just ".com"...
You just need to put this line into your production.rb file:
config.action_dispatch.tld_length = 2
In my case the domain ends in ".com.br", so I needed to tell Rails that the TLD has length 2 (com and br).
That's it.
We have been implementing GREG 5.0, and with the default configuration everything works fine. Once we replace the default localhost certificate in the wso2carbon.jks keystore with our own, we receive "java.security.SignatureException: Signature length not correct: got 256 but was expecting 128" when we log into the Store or Publisher using SSO.
We have removed the default keypair from wso2carbon.jks and added our own certificate. The password for our keystore and certificate is the same. We have updated all the configuration files per the WSO2 Carbon 4.4 documentation. We have updated the JRE (JAVA_HOME) with local_policy.jar and us_export_policy.jar (the unlimited-strength JCE policy files) in order to allow for the longer key length.
The administration console works great with no issues. If we change the login method of the Store or Publisher to "basic", it works fine. When the login method is set to "SSO", we end up sitting on a blank page at https://servername/store/acs. We get the same result in the browser whether we run as a Windows service or in console mode, but when running as a Windows service there is no error and no indication of what happened; when running in console mode, the error mentioned above is printed to the console.
I also noticed this behavior on Identity Server 5.0 when accessing dashboard.
We are running on windows.
Is there another location in WSO2 that I need to update to accommodate the increased key length?
Joe
The location I had missed updating was the IdentityAlias in repository/deployment/server/jaggeryapps/store/config/store.json and repository/deployment/server/jaggeryapps/publisher/config/publisher.json. Once I updated that value to match the alias of the keypair I was using in wso2carbon.jks, it appeared to solve the key-length error, but it created another problem.
Now it was giving me a NullPointerException. I had provided the alias of our keypair, but that was not the same as the alias of the certificate (exported from that keypair) that we had loaded into client-truststore.jks. So I decided to set both aliases so they would match. With that change I was finally able to successfully access the Store and Publisher.
After some further testing, it did not matter what my keypair alias was, as long as the value of IdentityAlias matched the alias of the certificate loaded into client-truststore.jks.
Hope this helps someone.
Joe
I have successfully deployed OpenCart on OpenShift, and it is running properly with the URL provided by OpenShift. But when I map that URL via a CNAME on my domain name, it shows an error that the app is not found.
Can someone please help me with this?
You may need to set an alias for the domain name you have chosen. You can do this via the web console. Aliases are what allow you to use your own domain names for your applications on OpenShift.
It's a 2-step process:
(1) Set up the CNAME record with your DNS provider
It sounds like you have already done this at your DNS provider.
(2) Configure OpenShift to use your alias
From the web console, go to your application's main page, click on the Settings icon, then click on the "Change" link to enter your custom domain name, e.g. www.example.com or something.example.com.
Let us know if that works,
Diane