I came across the network events article (https://jxbrowser.support.teamdev.com/support/solutions/articles/9000013108-network-events) for JxBrowser and want to add new cookies so that they can be used in subsequent calls.
Adding headers is supported, but since there is no direct access to cookies there, I tried the following:
public void onBeforeSendHeaders(BeforeSendHeadersParams paramBeforeSendHeadersParams) {
    List<Cookie> cookieList = browser.getCookieStorage().getAllCookies();
}
Note that the calls in the snippet below produce the same exception:
browser.getURL(); // Exception is thrown here
CookieStorage storage = setCookies(paramBeforeSendHeadersParams, browser, list);
storage.save(); // Exception is thrown here
In both cases I get:
java.lang.IllegalStateException: You are trying to execute some code that invokes synchronous message send to IPC channel. This code is executed in the scope of the handler which is bounded to synchronous message received from IPC channel. Such code execution causes a deadlock in native code with high probability and is forbidden.
What is the reasoning behind this? Any help is appreciated.
As I understand, you want your application to share cookies between several Browser instances.
It is possible to create two Browser instances with the same BrowserContext instance, which uses the same user data directory. As a result, they will share cookies and cache files. For example:
BrowserContext context = new BrowserContext(
        new BrowserContextParams("C:\\my-data1"));
Browser browser1 = new Browser(context);
Browser browser2 = new Browser(context);
In this case, you should not receive the exception.
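If you also need to read cookie values when the network events fire, one possible workaround (just a sketch; CookieCache is a made-up name, the package names assume the JxBrowser 6.x API from the snippet above, and I have not verified this against a specific version) is to refresh the cookie list on a separate thread and let the handler read only that cached snapshot, so no synchronous IPC call ever happens inside the handler:

import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.teamdev.jxbrowser.chromium.Browser;  // package assumed (JxBrowser 6.x)
import com.teamdev.jxbrowser.chromium.Cookie;   // package assumed (JxBrowser 6.x)

// Hypothetical helper: keeps a periodically refreshed snapshot of the cookies
// so that network handlers can read them without a synchronous IPC call.
public class CookieCache {

    private volatile List<Cookie> snapshot = Collections.emptyList();
    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();

    public CookieCache(final Browser browser) {
        // getAllCookies() is the synchronous IPC call, so it runs on this
        // background thread, never inside onBeforeSendHeaders().
        executor.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                snapshot = browser.getCookieStorage().getAllCookies();
            }
        }, 0, 1, TimeUnit.SECONDS);
    }

    // Safe to call from the network delegate: it only reads the cached list.
    public List<Cookie> getCookies() {
        return snapshot;
    }

    public void dispose() {
        executor.shutdownNow();
    }
}

The snapshot can of course be slightly stale, so this only helps if a roughly up-to-date cookie list is good enough for building the headers.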
I am not really seeking code examples, but I'm hoping someone can review my program design and provide feedback. I am trying to figure out how to ensure that only one instance of my "workflow" runs at a time.
I am working in C++.
This is my workflow:
I read rows off of a Postgres database.
If the table has any records, I want to do the following:
1. Read the records and transform them to JSON.
2. Send the JSON document to a remote web service.
3. Parse the response from the service. The service tells me which records were saved or not saved, based on their primary key.
4. Delete the successfully saved records.
5. Log the unsuccessful records (there's another process that consumes the logs, so my work is done there).
I want to perform all of this work on a separate thread (or "task", whatever higher-level abstraction is available in C++), and I want to make sure that if my function for step 1 gets called multiple times, the additional calls are simply "dropped" while step 1 is already in flight.
In C++, I believe I can use a flag and a mutex. I use something like std::lock_guard<std::mutex> at the top of my method; then the next line checks the flag.
// MyWorkflow.cpp
#include <mutex>
#include <string>
#include <vector>

std::mutex myMutex;
int inFlight = 0;

void process() {
    std::lock_guard<std::mutex> guard(myMutex);
    if (inFlight) {
        return; // another run is already in flight, drop this call
    }
    inFlight = 1;

    std::vector<Widget> widgets = readFromMyTable();
    std::string json = getJson(&widgets);
    ... // Send the json to the remote service and handle the response
}
Okay, let me explain my confusion. I want to use Curl to perform the HTTP request. But Curl works asynchronously. And so if I make the asynchronous HTTP call via Curl, my update function will just return and myMutex will be released, right?
I think in my asynchronous response handler, I need to call a second function that's in MyWorkflow.cpp
void markCompletion() {
    std::lock_guard<std::mutex> guard(myMutex);
    inFlight = 0; // Reset the in-flight flag here
}
Is this the right approach? I am worried that if an exception is thrown anywhere before I call markCompletion(), I will block all future callers. I think I need to ensure I have proper exception handling and always call markCompletion().
I am terribly sorry for asking such a noob question, but I really want to learn to do this the right way.
We have a deadlock situation which occurred because of heavy load on a microservice (say A), causing multiple requests from different client services (B, C). These calls from B and C come in for the same clientId (the key), are served by different instances of A, and try to update the same clientId's data in the database at the same time, causing the error below.
CannotAcquireLockException is thrown:
SQL Error: 60, SQLState: 61000
ORA-00060: deadlock detected while waiting for resource
We have decided to implement sharding at the load balancer (HAProxy) level, which will ensure that the same instance of A always serves the requests from B and C for a specific key (clientId), so we don't have multiple instances processing requests for the same key.
Now everything happens within a single JVM, since we have made sure that requests from B and C for a specific clientId always reach the same instance of A.
Even so, it is still possible that requests from B and C arrive for the same clientId nanoseconds apart, and then multiple threads will again try to update the same clientId's data in the database at the same time, causing the same error.
To improve this we are looking at possible solutions, and one of them is a ReentrantReadWriteLock, which conceptually should take care of this.
We are using Spring Data JPA, and the save being done looks like this:
clientJpaRepository.save(clientObject);
Now, is it possible to use something like the code below?
public void save(Client clientObject) {
    // writeLock is assumed to be the write lock of a shared ReentrantReadWriteLock
    String clientId = clientObject.getClientId();
    boolean isLockAcquired = false;
    try {
        isLockAcquired = writeLock.tryLock(100, TimeUnit.MILLISECONDS);
        if (isLockAcquired) {
            clientJpaRepository.save(clientObject);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        log.error("exception occurred trying to acquire lock for clientId={}", clientId);
    } finally {
        if (isLockAcquired) {
            writeLock.unlock(); // only unlock if the lock was actually acquired
        }
    }
}
I am not very sure how this is going to deal with the keys. That is, I don't want any threads to block if they are trying to update a different key (say clientId 2).
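To make the concern concrete, something like the sketch below is what I have in mind: one lock per clientId kept in a ConcurrentHashMap, so writers for different keys never block each other (KeyedLockSaver is just a made-up name and I have not tested this):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: one ReentrantLock per clientId, so saves for
// different clientIds do not contend with each other.
public class KeyedLockSaver {

    private final ConcurrentMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();
    private final ClientJpaRepository clientJpaRepository;

    public KeyedLockSaver(ClientJpaRepository clientJpaRepository) {
        this.clientJpaRepository = clientJpaRepository;
    }

    public void save(Client clientObject) {
        String clientId = clientObject.getClientId();
        // Create the per-key lock on first use.
        ReentrantLock lock = locks.computeIfAbsent(clientId, key -> new ReentrantLock());
        boolean acquired = false;
        try {
            acquired = lock.tryLock(100, TimeUnit.MILLISECONDS);
            if (acquired) {
                clientJpaRepository.save(clientObject);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            if (acquired) {
                lock.unlock();
            }
        }
    }
}

With this, reads done by other API calls are not affected at all, since they never touch these locks; the map does grow with the number of distinct clientIds, though, so it might eventually need some eviction.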
Also, another thing to note is that there could be reads of this data from the database happening as part of other API calls. Hopefully they would not be waiting too long, and I hope I don't need to make any changes there for the reads.
Sorry for the long question; I hope to hear from someone soon.
Thanks.
I am using libwidevinecdm.so from Chrome to handle DRM-protected data. I am currently successfully setting the Widevine server certificate I get from the license server. I can also create a session with the pssh box of the media I'm trying to decode. So far everything is successful (all promises resolve fine).
(session is created like this: _cdm->CreateSessionAndGenerateRequest(promise_id, cdm::SessionType::kTemporary, cdm::InitDataType::kCenc, pssh_box.data(), static_cast<uint32_t>(pssh_box.size()));)
I am then getting a session message of type kLicenseRequest which I am forwarding to the respective license server. The license server responds with a valid response and the same amount of data as I can see in the browser when using Chrome. I am then passing this to my session like this:
_cdm->UpdateSession(promise_id, session_id.data(), static_cast<uint32_t>(session_id.size()),
                    license_response.data(), static_cast<uint32_t>(license_response.size()));
The problem now is that this promise never resolves. It keeps posting the kLicenseRequest message over and over again to my session without ever returning. Does this mean my response is wrong? Or is this something else?
Br
Yanick
The issue is caused by the fact that everything in CreateSessionAndGenerateRequest is done synchronously; that means that by the time CreateSessionAndGenerateRequest returns, your promise will already be resolved.
The CDM emits the kLicenseRequest inside CreateSessionAndGenerateRequest, and it doesn't do so in a "fire & forget" fashion: the function waits there until you have returned from cdm::Host_10::OnSessionMessage. Since my implementation of OnSessionMessage was making a synchronous HTTP request to the license server and then, also synchronously, calling UpdateSession, the entire chain ended up blocking.
So ultimately I was calling UpdateSession while still inside CreateSessionAndGenerateRequest, and I assume the CDM cannot handle this and reacts by creating a new session with the given ID and generating a request again, which of course triggered another UpdateSession, and so on.
Ultimately, the simplest way to break the cycle was to make something asynchronous. I decided to launch a separate thread when receiving kLicenseRequest, wait a few milliseconds to make sure CreateSessionAndGenerateRequest has time to finish (not sure that is really required), and then issue the request to the license server.
The only change I had to do was adding the surrounding std::thread:
void WidevineSession::forward_license_request(const std::vector<uint8_t> &data) {
    std::thread{
        [=]() {
            std::this_thread::sleep_for(std::chrono::milliseconds{100});

            net::HttpRequest request{"POST", _license_server_url};
            request.add_header("Authorization", fmt::format("Bearer {}", _access_token))
                   .byte_body(data);
            const auto response = _client.execute(request);

            if (response.status_code() != 200) {
                log->error("Widevine license request not accepted by license server: {} {} ({})",
                           response.status_code(), response.status_text(),
                           utils::bytes_to_utf8(response.body()));
                throw std::runtime_error{"Error requesting widevine license"};
            }

            log->info("Successfully requested widevine license from license server");
            _adapter->update_session(this, _session_id, response.body());
        }
    }.detach();
}
I am building a REST client with cpprest-sdk to communicate with a web service. The problem is that every once in a while, after sending multiple successful requests (around 50), I get this exception:
WinHttpSendRequest: 2148074273 insufficient cache in function
Or sometimes:
ERROR_WINHTTP_SECURE_FAILURE (12175)
I looked for cache options in cpprest-sdk but did not find anything. Since the exception happens inside cpprest-sdk when I call .wait() on my task, I am not sure whether I can use WINHTTP_STATUS_CALLBACK to check for more details on this error. How can I investigate further to find the cause of this error?
Here is my Rest request:
void MyRestClient::PostKeys(const std::string & sKek, const std::string & sKid, const std::string & sCustomerAuthenticator) {
    uri_builder oBuilder(U("/keys?customerAuthenticator=") + to_string_t(sCustomerAuthenticator));
    oBuilder.append_query(KEK, to_string_t(sKek));

    json::value oBody;
    oBody[KID] = json::value::string(to_string_t(sKid));

    web::http::http_request oRequest;
    oRequest.set_method(methods::POST);
    oRequest.set_request_uri(oBuilder.to_uri());
    oRequest.set_body(oBody);

    m_oCurrentTask = oClient.request(oRequest).then([this](http_response oResponse) {
        OnPostResponse(oResponse);
    });
}
According to https://msdn.microsoft.com/en-us/library/windows/desktop/aa383928(v=vs.85).aspx (4th bullet), POST requests should not be cached, so I don't understand why I am getting the first exception. I also tried to disable HTTPS caching, as the 6th bullet in the link suggests, but that did not change anything.
Has anyone experienced something similar, or does anyone have any insight into what may be happening? Or is this normal behavior, and should I just retry my request when these exceptions happen?
Does your web service use TLS with Diffie-Hellman key exchange? If yes, you are probably seeing a bug in SChannel, the SSL/TLS implementation of Windows; see here for confirmation. Unfortunately, the only available fix is to update the Windows version on which your client is running to a recent build of Windows 10.
I have an MVC app where I am trying to capture all incoming requests in an ActionFilter. Here is the logging code; I am trying to log in a fire-and-forget fashion.
My issue is that if I execute this code synchronously, by taking out Task.Run, ELMAH does send out an email. But with the code shown below, I can see the error getting logged to the in-memory logger in elmah.axd, yet no emails are sent.
public void Log(HttpContextBase context)
{
    Task.Run(() =>
    {
        try
        {
            throw new NotImplementedException(); // simulating an error condition

            using (var s = _documentStore.OpenSession())
            {
                s.Store(GetDataToLog(context));
                s.SaveChanges();
            }
        }
        catch (Exception ex)
        {
            ErrorSignal.FromCurrentContext().Raise(ex);
        }
    });
}
I got this answer from Atif Aziz (ELMAH lead contributor) on the ELMAH Google group:
When you use Task.Run, the HttpContext is not transferred to the thread pool thread on which your action will execute. When ErrorSignal.FromCurrentContext is called from within your action, my guess is that it's probably failing with another exception because there is no current context. That exception is lying with the Task.
If you're on .NET 4, you're lucky because you'll see the ASP.NET app crash eventually (but possibly much after the fact) when the GC will kick in and collect the Task and its exception will go “unobserved”. If you're on .NET 4.5, the policy has been changed and the exception will simply get lost. Either way, your observation will be that mailing is not working. In fact, logging won't work either unless you use Elmah.ErrorLog.GetDefault(null).Log(new Error(ex)), where a null context is allowed. But that call only logs the error and does not do any mailing.
ELMAH's modules are connected to the ASP.NET context. If you detach from that context by forking to another thread, then you cannot rely on ELMAH's modules. You can only use Elmah.ErrorLog.GetDefault(null).Log(new Error(ex)) reliably to log an error.