I'm trying to build my application using the Phantom Omni haptic device and in order to get the angular velocity of the device, I'm using a function from its library (OpenHaptics):
hdGetDoublev(HD_CURRENT_ANGULAR_VELOCITY, ang_vel);
but it returns [0,0,0]. I'm using the same function to get the linear velocity:
hdGetDoublev(HD_CURRENT_VELOCITY, lin_vel);
and it's working fine. (Both lin_vel and ang_vel are defined as hduVector3Dd)
What am I missing?
I asked OpenHaptics support directly, and this was the answer: "This is not a bug, HD_CURRENT_ANGULAR_VELOCITY doesn't apply to Touch/Omni model, because its gimbal encoder wouldn't be sufficient for accurate angular velocity calculation".
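If you still need an estimate, a possible workaround (my own idea, not something support suggested) is to differentiate the gimbal angles yourself inside the servo loop. Below is a rough, untested sketch: it produces gimbal joint rates rather than a true Cartesian angular velocity of the stylus, and it will be noisy for exactly the reason support gave, so expect to filter it.
#include <HD/hd.h>
#include <HDU/hduVector.h>

// Servo-loop callback: estimate gimbal angular rates by finite differences.
HDCallbackCode HDCALLBACK angularRateCallback(void * /*userData*/)
{
    static hduVector3Dd prevAngles(0, 0, 0);
    static bool havePrev = false;

    hdBeginFrame(hdGetCurrentDevice());

    hduVector3Dd angles;
    hdGetDoublev(HD_CURRENT_GIMBAL_ANGLES, angles);    // gimbal joint angles (rad)

    HDdouble rate = 0.0;                               // servo-loop frequency (Hz)
    hdGetDoublev(HD_INSTANTANEOUS_UPDATE_RATE, &rate);

    hduVector3Dd angRate(0, 0, 0);
    if (havePrev && rate > 0.0)
    {
        // Finite difference: delta angle * loop frequency -> rad/s per joint.
        angRate = (angles - prevAngles) * rate;
        // A low-pass filter on angRate is advisable; raw differences are noisy.
    }
    prevAngles = angles;
    havePrev = true;

    hdEndFrame(hdGetCurrentDevice());
    return HD_CALLBACK_CONTINUE;
}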
I hope it can save you some time.
I have a video with 3 people speaking, and I would like to annotate the location of their eyes throughout it. I know that the Google Video Intelligence API has functionality for object tracking, but is it possible to handle such an eye-tracking process using the API?
The Google Video Intelligence API provides a face detection feature, which lets you detect faces within video frames as well as certain face attributes.
In general, you need to adjust the FaceDetectionConfig of the videos.annotate method, supplying the includeBoundingBoxes and includeAttributes arguments in the JSON request body:
{
  "inputUri": "string",
  "inputContent": "string",
  "features": [
    "FACE_DETECTION"
  ],
  "videoContext": {
    "segments": [
      "object (VideoSegment)"
    ],
    "faceDetectionConfig": {
      "model": "string",
      "includeBoundingBoxes": true,
      "includeAttributes": true
    }
  },
  "outputUri": "string",
  "locationId": "string"
}
There is a detailed (Python) example from Google on how to track objects and print out the detected objects afterwards. You could combine this with the AIStreamer live object tracking feature, which lets you upload a live video stream and get results back.
Some ideas/steps you could follow:
Recognize the eyes in the first frame of the video.
Set/highlight a box around the eyes you are tracking.
Track the eyes as an object in the next frames.
I'm currently developing a simple application for querying/retrieving data on a PACS. I use DCMTK for this purpose, and a DCM4CHEE PACS as a test server.
My goal is to implement simple C-FIND queries, and a C-MOVE retrieving system (coupled with a custom SCP to actually download the data).
To do so, I've created a CustomSCU class that inherits from the DCMTK DcmSCU class.
I first implemented a C-ECHO message, which worked great.
Then I tried to implement the C-FIND request, but my application reported the error "DIMSE No valid Presentation Context ID" (more on that below), while DCM4CHEE logged nothing. I then used the command-line tool findscu (from DCMTK) to see if there was some configuration issue, but the tool worked fine. So, in order to implement my C-FIND request, I read the source of findscu (here) and adapted it in my code (meaning that I'm not using DcmSCU::sendCFindRequest but the DcmFindSCU class).
But now I'm facing the same problem with the C-MOVE request. My code is pretty straightforward:
//transfer syntaxes
OFList<OFString> ts;
ts.push_back(UID_LittleEndianExplicitTransferSyntax);
ts.push_back(UID_BigEndianExplicitTransferSyntax);
ts.push_back(UID_LittleEndianImplicitTransferSyntax);
//sop class
OFString pc = UID_MOVEPatientRootQueryRetrieveInformationModel;
addPresentationContext(pc, ts);
DcmDataset query;
query.putAndInsertOFStringArray(DCM_QueryRetrieveLevel, "PATIENT");
query.putAndInsertOFStringArray(DCM_PatientID, <ThePatientId>);
OFCondition condition = sendMOVERequest(findPresentationContextID(pc, ""), getAETitle(), &query, nullptr);
return condition.good();
I've also tried using UID_MOVEStudyRootQueryRetrieveInformationModel instead of UID_MOVEPatientRootQueryRetrieveInformationModel, with the same result: my application shows the error
DIMSE No valid Presentation Context ID
As I understand it, a presentation context is the combination of one SOP class with one or more transfer syntaxes. I read that the problem could come from the PACS not accepting my presentation contexts. To be sure, I used the movescu tool (from DCMTK). It worked, and I saw this in the logs of the DCM4CHEE server:
received AAssociatedRQ
pc-1 : as=<numbers>/Patient Root Q/R InfoModel = FIND
ts=<numbers>/Explicit VR Little Endian
ts=<numbers>/Explicit VR Big Endian
ts=<numbers>/Implicit VR Little Endian
Does that mean that the movescu tool does a FIND before attempting the actual MOVE?
Therefore, I changed my application's presentation context creation to:
OFList<OFString> ts;
ts.push_back(UID_LittleEndianExplicitTransferSyntax);
ts.push_back(UID_BigEndianExplicitTransferSyntax);
ts.push_back(UID_LittleEndianImplicitTransferSyntax);
OFString pc1 = UID_FINDPatientRootQueryRetrieveInformationModel;
OFString pc = UID_MOVEPatientRootQueryRetrieveInformationModel;
addPresentationContext(pc1, ts);
addPresentationContext(pc, ts);
(I also tried the Study Root model.)
But this didn't do the trick.
The problem seems to lie on the client side, as findPresentationContextID(pc, "") always returns 0, no matter what.
I don't feel like it's possible to adapt the code of the movescu tool, as it appears to be very complex and not adequate for simple retrieve operations.
I don't know what else to try. I hope someone can help me understand what's going on. This is the last part of my application, as the storage SCP already works.
Regards
It looks like you are not negotiating the association with the PACS.
After adding the presentation contexts and before sending any command, the SCU must connect to the PACS and negotiate the presentation contexts with DcmSCU::initNetwork and then DcmSCU::negotiateAssociation.
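For illustration, here is a rough sketch of the expected call order using a plain DcmSCU (the peer host name, port and AE titles are placeholders to replace with your own values); the important part is that initNetwork() and negotiateAssociation() come after addPresentationContext() and before findPresentationContextID()/sendMOVERequest():
#include "dcmtk/config/osconfig.h"
#include "dcmtk/dcmnet/scu.h"

DcmSCU scu;
scu.setAETitle("MY_SCU");            // our AE title (placeholder)
scu.setPeerAETitle("DCM4CHEE");      // PACS AE title (placeholder)
scu.setPeerHostName("localhost");    // PACS host (placeholder)
scu.setPeerPort(11112);              // PACS port (placeholder)

// Presentation contexts, as in your code
OFList<OFString> ts;
ts.push_back(UID_LittleEndianExplicitTransferSyntax);
ts.push_back(UID_BigEndianExplicitTransferSyntax);
ts.push_back(UID_LittleEndianImplicitTransferSyntax);
OFString pc = UID_MOVEPatientRootQueryRetrieveInformationModel;
scu.addPresentationContext(pc, ts);

// The missing steps: set up the network and negotiate the association
OFCondition cond = scu.initNetwork();
if (cond.good())
    cond = scu.negotiateAssociation();

if (cond.good())
{
    // Only now can an accepted presentation context be found
    T_ASC_PresentationContextID presID = scu.findPresentationContextID(pc, "");
    DcmDataset query;
    query.putAndInsertOFStringArray(DCM_QueryRetrieveLevel, "PATIENT");
    query.putAndInsertOFStringArray(DCM_PatientID, "<ThePatientId>");
    cond = scu.sendMOVERequest(presID, scu.getAETitle(), &query, NULL);
}
scu.releaseAssociation();
After a successful negotiation, findPresentationContextID() should return a non-zero ID for the accepted context instead of 0.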
I'm using Beam to build a streaming ML pipeline that's very similar to the Zeitgeist system mentioned in the original MillWheel paper. However, I'm having difficulty using trained models to do online predictions (the blue vertical arrow "Models" in Figure 1 of the paper).
It seems that the Zeitgeist models are updated incrementally (?) each time a new windowed counter comes in. However, the specific model I'm using doesn't support incremental/online training, so I need to use some trigger/windowing to train the model in batches.
During prediction, I don't know how to align windows of features and batch-trained models.
Using a side input (see below) can make the pipeline run, but it gets stuck at the prediction step, waiting for the model to be materialized.
CoGroupByKey does not work because the windows of features and ensembledModels are not the same.
I also want to do model ensemble, which makes things even more complicated.
Here's a rough sketch of what I have:
// Corresponds to "Window Counter" in the MillWheel paper
final PCollection<Feature> features = pipeline
    .apply(PubsubIO.read(...))
    .apply(...some windowing...)
    .apply(ParDo.of(new FeatureExtractor()));

// Corresponds to "Model Calculator" in the MillWheel paper
final PCollection<Iterable<Model>> ensembledModels = features
    .apply(...some windowing and data triggers...)
    .apply(new ModelTrainer());

// Expose the trained models as a side-input view
final PCollectionView<Iterable<Model>> modelsView = ensembledModels
    .apply(View.asSingleton());

// Corresponds to "Spike/Dip Detector" in the MillWheel paper
// The pipeline gets stuck here when run
final PCollection<Score> score = features
    .apply(ParDo.of(new Predictor()).withSideInputs(modelsView));
I'm using the libPusher pod in a RubyMotion project, but running into an issue where my code works when used in the REPL but not in the app itself.
When I try this code in a viewDidAppear method it connects successfully and then disconnects during the channel subscription call.
When I try the same code in the console, it connects and subscribes perfectly.
I'm trying to figure out:
Why is this happening?
What should I change to alleviate the issue?
I'm using v1.5 of the pod and v2.31 of RubyMotion.
For reference, I'm also using the ProMotion framework, but I doubt that has anything to do with the issue.
Here's my code:
client = PTPusher.pusherWithKey("my_pusher_key_here", delegate:self, encrypted:true)
client.connect
channel = client.subscribeToChannelNamed("test_channel_1")
channel.bindToEventNamed('status', target: self, action: 'test_method:')
Well, I got it working by moving the connection and subscription calls into separate lifecycle methods.
I put:
client = PTPusher.pusherWithKey("my_pusher_key_here", delegate:self, encrypted:true)
client.connect
into the viewDidLoad method
and:
channel = client.subscribeToChannelNamed("test_channel_1")
channel.bindToEventNamed('status', target: self, action: 'test_method:')
into the viewDidAppear method.
I can't say I know exactly why this worked, but I assume it has to do with the time between the calls. The connection process must need a little time to complete.
I need to change Maya's working time unit using the API.
(see Window->Settings/Preferences->Preferences->Settings->Working Units->Time)
So I do:
MTime::Unit mayaTime = MTime::k120FPS;
status = MTime::setUIUnit(mayaTime);
[import some animated data]
// For debug
MTime::Unit tm = MTime::uiUnit();
tm is k120FPS, so that is OK. The animated data are also OK. BUT when I open the GUI, the working time unit is still the default one...
The documentation says:
"MTime::setUIUnit: Set the unit system to be used by the user in the UI. After the successful completion of this method, Maya's timeslider will be displaying frames in the specified units."
Do you see what I did wrong here?
Thanks for any help.
Try setting the optionVar "workingUnitTime".
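For example, from a C++ plug-in you could set it through MGlobal::executeCommand. This is only a rough sketch: the value string "120fps" is my assumption for what Maya stores for MTime::k120FPS, so check what your Maya version actually writes to that optionVar.
#include <maya/MTime.h>
#include <maya/MGlobal.h>
#include <maya/MStatus.h>

// Rough sketch: switch the UI unit via the API and mirror the change into the
// saved preference that the Preferences window reads.
MStatus setWorkingUnitTo120Fps()
{
    MStatus status = MTime::setUIUnit(MTime::k120FPS);
    if (status != MS::kSuccess)
        return status;

    // "workingUnitTime" is the preference shown under Settings > Working Units;
    // "120fps" is assumed to be the string corresponding to MTime::k120FPS.
    return MGlobal::executeCommand("optionVar -sv \"workingUnitTime\" \"120fps\"");
}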