Widevine session update endless loop - C++

I am using libwidevinecdm.so from Chrome to handle DRM-protected data. I am currently successfully setting the Widevine server certificate I get from the license server. I can also create a session with the PSSH box of the media I'm trying to decode. So far everything is successful (all promises resolve fine). The session is created like this:
_cdm->CreateSessionAndGenerateRequest(promise_id, cdm::SessionType::kTemporary,
                                      cdm::InitDataType::kCenc,
                                      pssh_box.data(), static_cast<uint32_t>(pssh_box.size()));
I am then getting a session message of type kLicenseRequest, which I forward to the respective license server. The license server responds with a valid response containing the same amount of data as I can see in the browser when using Chrome. I am then passing this to my session like this:
_cdm->UpdateSession(promise_id, session_id.data(), static_cast<uint32_t>(session_id.size()),
                    license_response.data(), static_cast<uint32_t>(license_response.size()));
The problem now is that this promise never resolves. The CDM keeps posting the kLicenseRequest message to my session over and over again, and the promise never returns. Does this mean my response is wrong? Or is this something else?
Br
Yanick

The issue is caused by the fact that everything in CreateSessionAndGenerateRequest is done synchronously - that means by the time CreateSessionAndGenerateRequest returns, your promise will already be resolved.
The CDM emits the kLicenseRequest inside CreateSessionAndGenerateRequest, and it doesn't do so in a "fire & forget" fashion: the function waits there until you have returned from cdm::Host_10::OnSessionMessage. Since my implementation of OnSessionMessage made a synchronous HTTP request to the license server and then - still synchronously - called UpdateSession, the entire chain ended up blocking.
So ultimately I was calling UpdateSession while still being inside CreateSessionAndGenerateRequest, and I assume the CDM cannot handle this and reacts by creating a new session with the given ID and generating a request again, which of course triggered another UpdateSession, and so on.
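To make the cycle concrete, this is roughly the shape my handler had before the fix - a simplified sketch, where MyCdmHost, post_to_license_server and next_promise_id are illustrative placeholders and only the cdm::Host_10 override itself is real:
void MyCdmHost::OnSessionMessage(const char* session_id, uint32_t session_id_size,
                                 cdm::MessageType message_type,
                                 const char* message, uint32_t message_size) {
    // We are still inside CreateSessionAndGenerateRequest() at this point.
    std::vector<uint8_t> request(message, message + message_size);

    // Blocking HTTP round trip to the license server...
    const std::vector<uint8_t> response = post_to_license_server(request);

    // ...followed by a re-entrant call into the CDM: UpdateSession() runs
    // before CreateSessionAndGenerateRequest() has returned, which is what
    // provoked the endless kLicenseRequest loop.
    _cdm->UpdateSession(next_promise_id(), session_id, session_id_size,
                        response.data(), static_cast<uint32_t>(response.size()));
}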
Ultimately, the simplest way to break the cycle was to make something asynchronous. I decided to launch a separate thread when receiving kLicenseRequest, wait a few milliseconds to make sure that CreateSessionAndGenerateRequest has time to finish (not sure if that is really required), and then issue the request to the license server.
The only change I had to make was adding the surrounding std::thread:
void WidevineSession::forward_license_request(const std::vector<uint8_t> &data) {
    std::thread{
        [=]() {
            // Give CreateSessionAndGenerateRequest() time to return before
            // calling back into the CDM (possibly not strictly required).
            std::this_thread::sleep_for(std::chrono::milliseconds{100});

            net::HttpRequest request{"POST", _license_server_url};
            request.add_header("Authorization", fmt::format("Bearer {}", _access_token))
                   .byte_body(data);
            const auto response = _client.execute(request);

            if (response.status_code() != 200) {
                log->error("Widevine license request not accepted by license server: {} {} ({})",
                           response.status_code(), response.status_text(),
                           utils::bytes_to_utf8(response.body()));
                // Throwing here would call std::terminate - an exception cannot
                // propagate out of a detached thread - so just bail out.
                return;
            }

            log->info("Successfully requested widevine license from license server");
            _adapter->update_session(this, _session_id, response.body());
        }
    }.detach();
}

Related

Using a lock in C++ across multiple tasks

I am not really seeking code examples, but I'm hoping someone can review my program design and provide feedback. I am trying to figure out how to ensure that only one instance of my "workflow" runs at a time.
I am working in C++.
This is my workflow:
I read rows off of a Postgres database.
If the table has any records, I want to perform these steps:
Read the records and transform them to JSON
Send the JSON document to a remote Web service
Parse the response from the service. The service tells me which records were saved or not saved, based on their primary key.
I delete the successfully saved records
I log the unsuccessful records (there's another process that consumes the logs and so my work is done).
I want to perform all of this using a separate thread (or "task", whatever higher-level abstraction is available in C++), and I want to make sure that if my function for step 1 gets called multiple times, the additional calls are simply "dropped" if a run is already in flight.
In C++, I believe I can use a flag and a mutex. I use something like std::lock_guard<std::mutex> at the top of my method; the next line then checks a flag.
// MyWorkflow.cpp
std::mutex myMutex;
bool inFlight = false;

void process() {
    std::lock_guard<std::mutex> guard(myMutex);
    if (inFlight) {
        return; // a run is already in progress - drop this call
    }
    inFlight = true;
    std::vector<Widget> widgets = readFromMyTable();
    std::string json = getJson(&widgets);
    ... // Send the json to the remote service and handle the response
}
Okay, let me explain my confusion. I want to use curl to perform the HTTP request, but curl works asynchronously. So if I make the asynchronous HTTP call via curl, my process() function will just return and myMutex will be released, right?
I think in my asynchronous response handler, I need to call a second function that's in MyWorkflow.cpp:
void markCompletion() {
    std::lock_guard<std::mutex> guard(myMutex);
    inFlight = false; // reset the in-flight flag here
}
Is this the right approach? I am worried that if an exception is thrown anywhere before I call markCompletion(), I will block all future callers. I think I need proper exception handling to guarantee that markCompletion() is always called.
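One way to guarantee the reset on every exit path is a small RAII guard held by a std::shared_ptr that the completion callback captures. A minimal sketch, assuming a hypothetical sendAsync(json, callback) wrapper around curl and a Response type (Widget, readFromMyTable and getJson are from the code above):
#include <atomic>
#include <memory>
#include <string>
#include <vector>

std::atomic<bool> inFlight{false};

struct InFlightGuard {
    ~InFlightGuard() { inFlight.store(false); } // runs on every exit path
};

void process() {
    // Claim the single slot atomically; drop the call if a run is active.
    if (inFlight.exchange(true)) {
        return;
    }
    auto guard = std::make_shared<InFlightGuard>();

    std::vector<Widget> widgets = readFromMyTable();
    std::string json = getJson(&widgets);

    // The callback keeps the guard alive, so the flag stays set until the
    // response is handled - and it still resets if an exception unwinds
    // the stack before the request is ever issued.
    sendAsync(json, [guard](const Response& response) {
        // parse response, delete saved rows, log failures
    });
}
With the atomic exchange there is no separate mutex to hold across the async call, and markCompletion() disappears entirely.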
I am terribly sorry for asking such a noob question, but I really want to learn to do this the right way.

Play Framework how to purposely delay a response

We have a Play app, currently on version 2.6. We are trying to prevent dictionary attacks against our login by delaying the "failed login" message sent back to our users when they provide a wrong password. We already hash and salt and follow all the best practices, but we are not sure if we are delaying correctly. So we have in our Controller:
public Result login() { return ok(loginHtml); }
and we have a:
public Result loginAction()
{
    // Check for user in database
    User user = User.find.query()...
    // Was the user found?
    if (user == null) {
        // Wrong password! Delay and redirect
        Thread.sleep(10000); // <-- how do we delay correctly?
        return redirect(routes.Controller.login());
    }
    // User is not null, so all good!
    ...
}
We are not sure if Thread.sleep(10000) is the best way to delay a response, since this might hang other requests that come in or use too many threads from the default pool. We have noticed that under 80+ hits per second the Play Framework does not route our HTTP calls to the routes: if we receive an HTTP POST request, our app will not even send that request to the Controller until 20+ seconds later; however, in the same time period, if we get an HTTP GET request, our app will process that GET instantly!
Currently we have 300 threads as the min/max in our Akka settings for the default fork-join pool. Any insights would be appreciated. We run a t2.xlarge AWS EC2 instance running Ubuntu.
Thank you.
Thread.sleep blocks the current thread; please try to avoid using it in production code as much as possible.
What you need to use is CompletionStage / CompletableFuture or any other abstraction for dealing with async programming, together with an asynchronous action.
Please take a look for more details about asynchronous actions: https://www.playframework.com/documentation/2.8.x/JavaAsync
In your case the solution would look something like this (excuse me, please, if this has mistakes - I'm primarily a Scala engineer):
import play.libs.concurrent.HttpExecutionContext;
import play.mvc.*;

import javax.inject.Inject;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class LoginController extends Controller {

    private HttpExecutionContext httpExecutionContext;
    // Create and inject a separate ScheduledExecutorService
    private ScheduledExecutorService executor;

    @Inject
    public LoginController(HttpExecutionContext ec,
                           ScheduledExecutorService executor) {
        this.httpExecutionContext = ec;
        this.executor = executor;
    }

    public CompletionStage<Result> loginAction() {
        User user = User.find.query()...
        if (user == null) {
            // Complete the future from a scheduled task, so no request
            // thread ever sleeps for the 10 seconds.
            CompletableFuture<Result> delayed = new CompletableFuture<>();
            executor.schedule(
                () -> delayed.complete(redirect(routes.Controller.login())),
                10, TimeUnit.SECONDS);
            return delayed;
        } else {
            // return another response
        }
    }
}
Hope this helps!
I don't like this approach at all. It hogs threads for no reason and can probably cause your entire system to lock up if someone with malicious intent finds out you are doing this. Let me propose a better approach:
In the User table, store a nullable LocalDateTime with the time of the last login attempt.
When you fetch the user from the DB, check the last attempt time (compare it to LocalDateTime.now()); if 10 seconds have passed since the last attempt, perform the password comparison.
If the passwords don't match, store the last attempt time as now.
This can also be handled gracefully on the front end if you provide good error responses.
EDIT: If you want to delay login attempts NOT based on the user, you could create an attempt table and store the last attempt by IP address.
If you really want to do it your way (which I don't recommend), you need to read up on this first: https://www.playframework.com/documentation/2.8.x/ThreadPools

JXBrowser modify cookies

I have come across the network events article (https://jxbrowser.support.teamdev.com/support/solutions/articles/9000013108-network-events) for JXBrowser and wanted to add new cookies so that they could be used in subsequent calls.
Support is available for adding headers; however, since no direct access is available for the cookies, I tried using:
public void onBeforeSendHeaders(BeforeSendHeadersParams paramBeforeSendHeadersParams)
{
    List<Cookie> cookieList = browser.getCookieStorage().getAllCookies();
}
Also note that the calls in the snippet below produce the same exception:
browser.getURL(); // Exception is thrown here
CookieStorage storage = setCookies(paramBeforeSendHeadersParams, browser, list);
storage.save(); // Exception is thrown here
but if I do this I get:
java.lang.IllegalStateException: You are trying to execute some code that invokes synchronous message send to IPC channel. This code is executed in the scope of the handler which is bounded to synchronous message received from IPC channel. Such code execution causes a deadlock in native code with high probability and is forbidden.
What is the reasoning behind this? Any help is appreciated.
As I understand it, you want your application to share cookies between several Browser instances.
It is possible to create two Browser instances with the same BrowserContext instance, which uses the same user data directory. As a result, they will share cookies and cache files. For example:
BrowserContext context = new BrowserContext(
        new BrowserContextParams("C:\\my-data1"));
Browser browser1 = new Browser(context);
Browser browser2 = new Browser(context);
In this case, you should not receive the exception.

How to lock a long async call in a WebApi action?

I have this scenario where I have a WebApi with an endpoint that, when triggered, does a lot of work (around 2-5 min). It is a POST endpoint with side effects, and I would like to limit the execution so that if two requests are sent to this endpoint (this should not happen, but better safe than sorry), one of them will have to wait, in order to avoid race conditions.
I first tried to use a simple static lock inside the controller like this:
lock (_lockObj)
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
This is of course not possible because of the await inside the lock statement.
Another solution I considered was to use a SemaphoreSlim implementation like this:
await semaphore.WaitAsync();
try
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
finally
{
    semaphore.Release();
}
However, according to MSDN:
The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short.
Since in this scenario the wait times may even reach 5 minutes, what should I use for concurrency control?
EDIT (in response to plog17):
I do understand that passing this task on to a service might be the optimal way; however, I do not necessarily want to queue something in the background that still runs after the request is done.
The request involves other requests and integrations that take some time, but I would still like the user to wait for this request to finish and get a response regardless.
This request is expected to be fired only once a day, at a specific time, by a cron job. However, there is also an option to fire it manually by a developer (mostly in case something goes wrong with the job), and I would like to ensure the API doesn't run into concurrency issues if the developer, e.g., accidentally double-sends the request.
If only one request of that sort can be processed at a given time, why not implement a queue?
With such a design, there is no more need to lock or wait while processing the long-running request.
The flow could be:
Client POSTs /ResourcesToProcess and should receive 202 Accepted quickly
HttpController simply queues the task (and returns the 202 Accepted)
Another service (a Windows service?) dequeues the next task
Processes the task
Updates the resource status
During this process, the client should easily be able to get the status of requests previously made:
If the task is not found: 404 Not Found. Resource not found for id 123
If the task is processing: 200 OK. 123 is processing.
If the task is done: 200 OK. Process response.
Your controller could look like:
public class TaskController
{
    // constructor and private members

    [HttpPost, Route("")]
    public IHttpActionResult QueueTask(RequestBody body)
    {
        // Enqueue only - the heavy work happens in the consuming service.
        messageQueue.Add(body);
        return StatusCode(HttpStatusCode.Accepted); // 202
    }

    [HttpGet, Route("{taskId}")]
    public IHttpActionResult GetTask(string taskId)
    {
        YourThing thing = tasksRepository.Get(taskId);
        if (thing == null)
        {
            return NotFound();
        }
        if (thing.IsProcessing)
        {
            return Ok("thing is processing");
        }
        if (!thing.IsDone) // assumes a flag distinguishing "queued" from "done"
        {
            return Ok("thing is not processing yet");
        }
        // here we assume thing has been processed
        return Ok(thing.ResponseContent);
    }
}
This design suggests that you do not handle the long-running process inside your Web API. Indeed, that may not be the best design choice. If you still want to do so, you may want to read:
Long running task in WebAPI
https://blogs.msdn.microsoft.com/webdev/2014/06/04/queuebackgroundworkitem-to-reliably-schedule-and-run-background-processes-in-asp-net/

How to display remote email message?

I have been using this code to display IMAP4 messages:
void DisplayMessageL( const TMsvId &aId )
{
    // 1. construct the client MTM
    TMsvEntry indexEntry;
    TMsvId serviceId;
    User::LeaveIfError( iMsvSession->GetEntry(aId, serviceId, indexEntry) );
    CBaseMtm* mtm = iClientReg->NewMtmL(indexEntry.iMtm);
    CleanupStack::PushL(mtm);
    // 2. construct the user interface MTM
    CBaseMtmUi* uiMtm = iUiReg->NewMtmUiL(*mtm);
    CleanupStack::PushL(uiMtm);
    // 3. display the message
    uiMtm->BaseMtm().SwitchCurrentEntryL(indexEntry.Id());
    CMsvOperationWait* waiter = CMsvOperationWait::NewLC();
    waiter->Start(); // we use a synchronous waiter
    CMsvOperation* op = uiMtm->OpenL(waiter->iStatus);
    CleanupStack::PushL(op);
    CActiveScheduler::Start();
    // 4. cleanup
    CleanupStack::PopAndDestroy(4); // op, waiter, uiMtm, mtm
}
However, when the user attempts to download a remote message (i.e. one of the emails previously not retrieved from the mail server) and then cancels the request, my code remains blocked and never receives the information that the action was cancelled.
My question is:
what is the workaround for the above, so the application does not get stuck?
can anyone provide a working example of an asynchronous call for opening remote messages that does not panic and crash the application?
Asynchronous calls for POP3, SMTP and local IMAP4 messages work perfectly, but remote IMAP4 messages create this issue.
I am testing these examples for S60 5th edition.
Thank you all in advance.
First of all, I would try removing CMsvOperationWait and dealing with the open request asynchronously - i.e. have an active object waiting for the CMsvOperation to complete.
CMsvOperationWait is nothing more than a convenience to make an asynchronous operation appear synchronous, and my suspicion is that this is the culprit - in the download-then-show-message case, there are two asynchronous operations chained.
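For the active-object route, a minimal sketch (class and member names are illustrative; only the CActive/CMsvOperation machinery is real):
class CMessageOpener : public CActive
{
public:
    static CMessageOpener* NewL()
    {
        CMessageOpener* self = new (ELeave) CMessageOpener();
        CActiveScheduler::Add(self);
        return self;
    }

    ~CMessageOpener()
    {
        Cancel();
        delete iOperation;
    }

    void OpenMessageL(CBaseMtmUi& aUiMtm)
    {
        delete iOperation;
        iOperation = NULL;
        iOperation = aUiMtm.OpenL(iStatus); // asynchronous - returns at once
        SetActive();                        // RunL() fires on completion
    }

private:
    CMessageOpener() : CActive(EPriorityStandard) {}

    void RunL()
    {
        // iStatus.Int() is KErrNone on success and KErrCancel when the
        // user aborts the download, so nothing is left blocked.
        TInt err = iStatus.Int();
        delete iOperation;
        iOperation = NULL;
        // report err to the UI here
    }

    void DoCancel()
    {
        if (iOperation)
            iOperation->Cancel();
    }

    CMsvOperation* iOperation;
};
The key point is that OpenL() returns immediately and RunL() runs whether the operation completes or is cancelled, so there is no nested CActiveScheduler::Start() loop to get stuck in.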