Drupal 8 | MySQL Gone Away Error when connecting to a remote DB server - drupal-8

I am facing a MySQL "gone away" issue. The site was working fine; for the past 2 days I have been getting this error.
I have tried all combinations of max_allowed_packet, wait_timeout & interactive_timeout, and it is still not working.
I tried different Drupal instances on different servers & different DB servers, same issue. Please help me fix this. There is no network issue either: ping and telnet to port 3306 work fine.
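For reference, these are the settings I have been adjusting (the values below are simply the ones I tried, not recommendations):
-- Check the current values on the remote DB server
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE '%timeout%';
# my.cnf on the DB server (example values I experimented with)
[mysqld]
max_allowed_packet = 64M
wait_timeout = 28800
interactive_timeout = 28800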
Additional uncaught exception thrown while handling exception.
Original
Drupal\Core\Database\DatabaseExceptionWrapper: SQLSTATE[HY000]: General error: 2006 MySQL server has gone away: INSERT INTO {cache_discovery} (cid, expire, created, tags, checksum, data, serialized) VALUES (:db_insert_placeholder_0, :db_insert_placeholder_1, :db_insert_placeholder_2, :db_insert_placeholder_3, :db_insert_placeholder_4, :db_insert_placeholder_5, :db_insert_placeholder_6) ON DUPLICATE KEY UPDATE cid = VALUES(cid), expire = VALUES(expire), created = VALUES(created), tags = VALUES(tags), checksum = VALUES(checksum), data = VALUES(data), serialized = VALUES(serialized); Array ( [:db_insert_placeholder_0] => entity_type [:db_insert_placeholder_1] => -1 [:db_insert_placeholder_2] => 1621427865.35 [:db_insert_placeholder_3] => entity_types [:db_insert_placeholder_4] => 1 [:db_insert_placeholder_5] => a:8:{s:6:"action";O:42:"Drupal\Core\Config\Entity\ConfigEntityType":44:{s:16:"*config_prefix";N;s:15:"*static_cache";b:0;s:14:"*lookup_keys";a:1:{i:0;s:4:"uuid";}s:16:"*config_export";a:5:{i:0;s:2:"id";i:1;s:5:"label";i:2;s:4:"type";i:3;s:6:"plugin";i:4;s:13:"configuration";}s:21:"*mergedConfigExport";a:0:{}s:15:"*render_cache";b:1;s:19:"*persistent_cache";b:1;s:14:"*entity_keys";a:8:{s:2:"id";s:2:"id";s:5:"label";s:5:"label";s:8:"revision";s:0:"";s:6:"bundle";s:0:"";s:8:"langcode";s:8:"langcode";s:16:"default_langcode";s:16:"default_langcode";s:29:"revision_translation_affected";s:29:"revision_translation_affected";s:4:"uuid";s:4:"uuid";}s:5:"*id";s:6:"action";s:16:"*originalClass";s:27:"Drupal\system\Entity\Action";s:11:"*handlers";a:2:{s:6:"access";s:45:"Drupal\Core\Entity\EntityAccessControlHandler";s:7:"storage";s:45:"Drupal\Core\Config\Entity\ConfigEntityStorage";}s:19:"*admin_permission";s:18:"administer actions";s:25:"*permission_granularity";s:11:"entity_type";s:8:"*links";a:0:

Related

JASIG CAS - Old TGT/TGC cookie revalidation causes login loop

I am having trouble with JASIG CAS 3.5.2.1.
I have been using it for a long time, and this seems to be a new problem since my last deployment.
The first time I log in, everything is alright:
- TICKET_GRANTING_TICKET_CREATED
- SERVICE_TICKET_CREATED
After a while, ticket cleanup removes the TGT, which is correct:
- Ticket is expired due to the time since last use being greater than the timeToKillInMilliseconds
- Removing ticket [TGT-...
So when I go to my application, I am redirected to the login page.
Even if my browser still has the cookie with the TGT information and sends it to CAS, it is refused, as the ticket has been cleaned up, which is normal:
- Attempting to retrieve ticket [TGT-402...
- SERVICE_TICKET_NOT_CREATED
2019-03-07 09:57:58,929 DEBUG [org.springframework.webflow.mvc.view.AbstractMvcView] - <Rendering MVC [org.springframework.web.servlet.view.JstlView: name 'casLoginView'; URL [/WEB-INF/view/jsp/default/ui/casLoginView.jsp]] with model map [{flowRequestContext=[RequestControlContextImpl#6d171024 externalContext = org.springframework.webflow.mvc.servlet.MvcExternalContext#3dc20cd8, currentEvent = generated, requestScope = map[[empty]], attributes = map[[empty]], messageContext = [DefaultMessageContext#3520ba5b sourceMessages = map[[null] -> list[[empty]]]], flowExecution = [FlowExecutionImpl#524e370 flow = 'login', flowSessions = list[[FlowSessionImpl#38cfd0b3 flow = 'login', state = 'viewLoginForm', scope = map['loginTicket' -> 'LT-1505-Dnd7hs2pezvt51fj79MQnLxHfoZzew', 'service' -> http://preprod.enpc-center.fr/login, 'credentials' -> [username: null], 'warnCookieValue' -> false, 'ticketGrantingTicketId' -> 'TGT-402-ZxdMougCuYRVhFskdVBSiF7tqepwvRFx3FNtwR6Ktk3KQchM5L-preprod.sso.enpc-center.fr', 'viewScope' -> map['commandName' -> 'credentials']]]]]], flashScope=map[[empty]], currentUser=null, loginTicket=LT-1505-Dnd7hs2pezvt51fj79MQnLxHfoZzew, service=http://preprod.enpc-center.fr/login, org.springframework.validation.BindingResult.credentials=org.springframework.webflow.mvc.view.BindingModel: 0 errors, commandName=credentials, credentials=[username: null], flowExecutionKey=e1s1, warnCookieValue=false, flowExecutionUrl=/cas/login?service=%5BLjava.lang.String%3B%4077aac79c, ticketGrantingTicketId=TGT-402-ZxdMougCuYRVhFskdVBSiF7tqepwvRFx3FNtwR6Ktk3KQchM5L-preprod.sso.enpc-center.fr, viewScope=map['commandName' -> 'credentials']}]>
2019-03-07 09:57:59,437 DEBUG [org.jasig.cas.web.flow.TerminateWebSessionListener] - <Error getting service from flow state.>
java.lang.IllegalStateException: No active FlowSession to access; this FlowExecution has ended
Everything seems to be alright at this point.
The problem is that when I log in using credentials, the browser sends the existing TGT cookie, so CAS tries to retrieve it and redirects me to the login page again.
I have to remove the TGT cookie before I can successfully log in again.
Do you have any clue about this strange behaviour?
Thank you in advance.
Do you have any clue about this strange behaviour?
This is how CAS 3.5.x operated. The software did not immediately check the validity of the TGT linked to the cookie; it only did so in certain situations, when it wanted to do something specific with the TGT passed by the cookie. When a TGT was removed as part of the cleanup process, in some cases CAS/the browser would still show you as successfully logged in, because the software only checked that a cookie existed and did not verify its relationship with its TGT. The best course of action is to close your browser and, as you note, clear cookies.
Newer CAS versions have fixed this problem.
PS: You may want to consider removing the TerminateWebSessionListener, or increasing its timeout value.
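If you go the timeout route: the listener is configured in cas-servlet.xml and, if I remember the 3.5.x source correctly, exposes a timeToDie property in seconds. A minimal sketch (verify the property name against your version):
<bean id="terminateWebSessionListener"
      class="org.jasig.cas.web.flow.TerminateWebSessionListener"
      p:timeToDie="300" /> <!-- assumption: property name/units per 3.5.x source -->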

Oracle Apex, apex_util.get_print_document not working over https

I am working on a project where I generate and store PDFs in the database using apex_util.get_print_document.
It looks like this:
declare
  l_report blob;
begin
  l_report := apex_util.get_print_document(
    p_application_id     => :APP_ID,
    p_report_query_name  => 'query_name',
    p_report_layout_name => 'layout_name',
    p_report_layout_type => 'xsl-fo',
    p_document_format    => 'pdf');

  update table_blob
     set report          = l_report,
         mimetype        = 'application/pdf',
         filename        = :P1_INVOICE_NO || '.pdf',
         report_saved_by = :USER,
         report_saved_on = sysdate
   where job_id = :P1_JOB;
end;
This works perfectly over an http connection, but when I access the same page over https I get the error below. Please help!
ORA-20001: The printing engine could not be reached because either the URL specified is incorrect or a proxy URL needs to be specified.
For APEX to be able to make outbound HTTP calls over SSL, a database wallet must be created, and its configuration must be specified in APEX Instance Administration:
http://docs.oracle.com/cd/E59726_01/doc.50/e39151/adm_wrkspc002.htm#sthref384
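A minimal sketch of the wallet setup with orapki (the path, password, and certificate file below are placeholders; import the root CA of the SSL certificate presented by your print server):
# Create an auto-login wallet
orapki wallet create -wallet /u01/app/oracle/wallet -pwd WalletPassword1 -auto_login
# Add the print server's root CA certificate as a trusted certificate
orapki wallet add -wallet /u01/app/oracle/wallet -trusted_cert -cert /tmp/print_server_root_ca.crt -pwd WalletPassword1
Then enter the wallet path (file:/u01/app/oracle/wallet) and password under Instance Settings in APEX Instance Administration.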

Route 53 alias record not working?

I previously had a website working on AWS. It was created & registered with AWS, set up in a hosted zone, and pointed to an EC2 instance. Everything was working fine.
I got "smart" and created a load balancer, which pointed to the EC2 instance; then I deleted the previous hosted zone record (and associated record set) and re-added the hosted zone record, pointing it to the load balancer.
After much googling I determined I needed to add an "A" record, make it an alias, and point it to the load balancer. All good so far.
Then I went to access the website in a browser, and I'm getting ERR_NAME_NOT_RESOLVED. I waited hours for DNS servers to update and still no luck. Flushed the DNS cache, no luck.
I've changed multiple other things: tried www in front of the name in the record set, tried a PTR record pointing to the load balancer's DNS name, and even tried to sync the DNS server names between the domain record and the hosted zone record. Still no luck. Same error.
I've run nslookup -debug and honestly don't know what I'm looking at.
C:\Users\sam>nslookup -debug abc.com
Got answer:
HEADER:
opcode = QUERY, id = 1, rcode = NOERROR
header flags: response, auth. answer, want recursion, recursion avail.
questions = 1, answers = 1, authority records = 0, additional = 0
QUESTIONS:
1.1.168.192.in-addr.arpa, type = PTR, class = IN
ANSWERS:
-> 1.1.168.192.in-addr.arpa
name = xyz
ttl = 0 (0 secs)
Server: xyz
Address: 192.168.1.1
Got answer:
HEADER:
opcode = QUERY, id = 2, rcode = SERVFAIL
header flags: response, want recursion, recursion avail.
questions = 1, answers = 0, authority records = 0, additional = 0
QUESTIONS:
abc.com, type = A, class = IN
------------
Got answer:
HEADER:
opcode = QUERY, id = 3, rcode = SERVFAIL
header flags: response, want recursion, recursion avail.
questions = 1, answers = 0, authority records = 0, additional = 0
QUESTIONS:
abc.com, type = AAAA, class = IN
------------
Got answer:
HEADER:
opcode = QUERY, id = 4, rcode = SERVFAIL
header flags: response, want recursion, recursion avail.
questions = 1, answers = 0, authority records = 0, additional = 0
QUESTIONS:
abc.com, type = A, class = IN
------------
Got answer:
HEADER:
opcode = QUERY, id = 5, rcode = SERVFAIL
header flags: response, want recursion, recursion avail.
questions = 1, answers = 0, authority records = 0, additional = 0
QUESTIONS:
abc.com, type = AAAA, class = IN
*** xyz can't find abc.com: Server failed
I'm sure it's something dumb, but I've spent too much time on this and can't think anymore.
What did I do wrong?
Thanks for your help.
even tried to sync the DNS server names between the domain record and the hosted zone record.
If that was necessary, then it sounds like at some point you deleted and recreated the hosted zone... which does not work the way you may have anticipated.
The simplest way out of that is this:
Leaving the existing zone exactly as it is, create a new hosted zone with the same domain. (Yes, this works).
Note the four name servers assigned for the new hosted zone.
Go to the domain record (the registrar component of Route 53, not the hosted zone component) and change the 4 name servers to match those assigned to your new hosted zone.
In the new hosted zone, create a new A record, hostname box empty, Alias = Yes, and select the ELB name (a CLI equivalent is sketched after these steps).
Once it's working, delete the old hosted zone.
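If you would rather script the alias record, a rough AWS CLI equivalent looks like this (both zone IDs and the ELB DNS name are placeholders; note that AliasTarget.HostedZoneId is the ELB's own canonical hosted zone ID, shown on the load balancer's description tab, not your hosted zone's ID):
aws route53 change-resource-record-sets \
  --hosted-zone-id Z_NEW_HOSTED_ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "abc.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z_ELB_CANONICAL_ZONE_ID",
          "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com.",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'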

Rack-attack and Allow2Ban filtering in Rails 4

I'm implementing Kickstarter's Rack-attack in my Rails app.
The whitelist/blacklist filtering is working properly, but I'm having issues using Allow2Ban to lock out IP addresses that are hammering my sign_in (Devise) page. Note: I'm testing this locally and have removed localhost from the whitelist.
# Lockout IP addresses that are hammering your login page.
# After 3 requests in 1 minute, block all requests from that IP for 1 hour.
Rack::Attack.blacklist('allow2ban login scrapers') do |req|
  # `filter` returns a falsy value if the request is to your login page (but
  # still increments the count), so requests below the limit are not blocked
  # until they hit the limit. At that point, filter returns true and blocks.
  Rack::Attack::Allow2Ban.filter(req.ip, :maxretry => 3, :findtime => 1.minute, :bantime => 1.hour) do
    # The count for the IP is incremented if the return value is truthy.
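    # (Assumption worth checking: this path must match your actual Devise
    # route; the default is often '/users/sign_in' rather than '/sign_in'.)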
    req.path == '/sign_in' && req.post?
  end
end
In the Rack-attack documentation, it clearly states that caching is required for the throttling functionality, e.g.:
Rack::Attack.throttle('req/ip', :limit => 5, :period => 1.second) do |req| ... end
but it doesn't state this for Allow2Ban. Does anyone know if a cache is required for Allow2Ban, or am I implementing it incorrectly with the code above on a Devise sign_in page?
Yes, Allow2Ban and Fail2Ban definitely need caching (in https://github.com/kickstarter/rack-attack/blob/master/lib/rack/attack/fail2ban.rb you can see how and why).
By the way, I suggest using Redis as the cache, because it ensures that your application blocks an IP address even if you are running more than one application node. If you use the Rails cache in a multi-node scenario, your filters will be managed per instance, which I assume is not what you want.
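A minimal sketch of pointing Rack::Attack at Redis (this assumes the redis-store / redis-activesupport gems that were common in the Rails 4 era; the URL is a placeholder):
# config/initializers/rack_attack.rb
# Rack::Attack uses Rails.cache by default; point it at Redis instead
# so that Allow2Ban counters are shared across all application nodes.
Rack::Attack.cache.store = ActiveSupport::Cache::RedisStore.new('redis://localhost:6379/0')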

SignatureDoesNotMatch in Amazon API

I am using the Amazon API and get this error while pushing my stock from my database to the Amazon website:
Caught Exception: Internal Error
Response Status Code: 0
Error Code:
Error Type:
Request ID:
XML:
I read this thread (amazonsellercommunity.com/forums/thread.jspa?messageID=2194823) and then got the error explanation:
<Error><Type>Sender</Type><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.</Message><Detail/></Error>
So I thought my MARKETPLACE_ID, MERCHANT_ID, AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY could be wrong. But I checked, and this information is correct.
Actually, I don't understand why this error happens... It worked perfectly before, and since a couple of days ago it just crashes. And I didn't change anything in my code. Strange, isn't it?
Edit:
Here is the section of my code for the signature.
define('DATE_FORMAT', 'Y-m-d\TH:i:s\Z');
define('AWS_ACCESS_KEY_ID', 'ABC...');        // My AWS Access Key Id (20 characters)
define('AWS_SECRET_ACCESS_KEY', 'ABCDEF...'); // My AWS Secret Access Key (40 characters)
define('APPLICATION_NAME', 'MyCompany_AmazonMWS');
define('APPLICATION_VERSION', '0.0.1');
define('MERCHANT_ID', 'XXXXXXX');    // My Merchant ID
define('MARKETPLACE_ID', 'XXXXXXX'); // My Marketplace ID

$config = array(
    'ServiceURL'    => "https://mws.amazonservices.fr",
    'ProxyHost'     => null,
    'ProxyPort'     => -1,
    'MaxErrorRetry' => 3,
);

$service = new MarketplaceWebService_Client(
    AWS_ACCESS_KEY_ID,
    AWS_SECRET_ACCESS_KEY,
    $config,
    APPLICATION_NAME,
    APPLICATION_VERSION
);

$parameters = array(
    'Marketplace'     => MARKETPLACE_ID,
    'Merchant'        => MERCHANT_ID,
    'FeedType'        => '_POST_INVENTORY_AVAILABILITY_DATA_',
    'FeedContent'     => $feedHandle,
    'PurgeAndReplace' => false,
    'ContentMd5'      => base64_encode(md5(stream_get_contents($feedHandle), true)),
);

// and then I do this:
$request = new MarketplaceWebService_Model_SubmitFeedRequest($parameters);
invokeSubmitFeed($service, $request);
If you want to see some parts of my code, just ask.
If I recall correctly, the authentication mechanism for Amazon APIs is sensitive to the current date/time on your machine (which is used in the process of signing the request). Check that your date/time is set correctly.
For me it was just an error with my web app passing URL-escaped strings. Amazon didn't like the special characters, and this (not so useful) error came up. Make sure your file names contain no URL-escaped characters.
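For what it's worth, a quick sketch of how you might check for that before submitting (plain PHP; $filename is a placeholder for whatever name you pass along with the feed):
// If decoding changes the string, it contained URL-escaped characters.
if (rawurldecode($filename) !== $filename) {
    // Rename the file (or decode the name) before submitting the feed.
}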
I solved it (on Ubuntu 14.04 Server) using ntpdate:
First make sure it is installed:
apt-get install ntpdate
And then execute:
ntpdate ntp.ubuntu.com