Prolog beginner question about scheduling and assigning groups - list

It has descended into pure randomness, since I obviously can't get a feel for the logic of this logic. Over a year ago I felt I was getting somewhere, then didn't look at it until now, and now it's a complete mystery what is stopping this from working.
pref(mark,a).
pref(bart,a).

get_pref(A,[X|Y]) :- pref(X,A).
get_pref(A,[X|Y]) :- pref(X,A), get_pref(A,Y).

timetable([C1,L1],[C2,L2]) :-
    All = [S1,S2,S3,S4],
    L1 = [S1,S2],
    L2 = [S3,S4],
    get_pref(C1,L1),
    get_pref(C2,L2),
    list_to_set(All,All).
With the query timetable([a,[S1,S2]],[b,[S3,S4]]), no results are found. Then I added pref(mary,b). and pref(sue,b). and I get 16 results. At least the names aren't duplicated, but they show up in switched positions, and, oddly enough, a _ variable appears either zero, one, or two times.
In the initial statements, or predicates (if that's what they're called), I've seen some tutorials show an additional get_pref(A,[]) clause that looks like it's 'setting something up', and sometimes a :- ! on things that all seem roughly related, but it's all gibberish the way my dyslexic/ADHD/autistic brain interprets the examples and tutorials.
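If I squint at that extra get_pref(A,[]) clause, it looks like the base case of the recursion. Translated into the kind of code I can actually read, my guess (and it is only a guess at the intended meaning) comes out to something like this in Python:

# My guess at the tutorials' version:
#   get_pref(_, []).                                    base case: empty list, succeed
#   get_pref(A, [X|Y]) :- pref(X, A), get_pref(A, Y).   check the head, recurse on the tail
def get_pref(a, people, prefs):
    """True when everyone in `people` has preference `a` (prefs maps name -> pref)."""
    if not people:            # the get_pref(A, []) clause
        return True
    head, tail = people[0], people[1:]
    return prefs.get(head) == a and get_pref(a, tail, prefs)

print(get_pref('a', ['mark', 'bart'], {'mark': 'a', 'bart': 'a'}))  # True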
I am trying to work towards building a comprehensive scheduling algorithm, but these first stages are a time-consuming nightmare of trial and error. Do you have any pointers on the direction and strategy to take, or links to sites or videos that actually, once and for all, allow those of us who are used to different forms of problem solving to bridge the gap?
Thank you!

How to link multiple ports from an Expression to multiple groups of a Union

I've added an image in order to explain myself better.
I have 300-something ports in an Expression, and I have created the equivalent number of groups in a Union. I want each port of this Expression to go to a port/field of the Union; a one-to-one relationship. It seems PowerCenter is not able to do this with autolink, or at least I'm unable to find the proper way to do it. How could I work around this issue? I've been told it's likely that in a few days there will be more than 700 ports, and the amount of time it takes to do this by hand is quite insane. Thanks in advance.
I'm surprised it validates: Union is for homogeneous sources, but you seem to be trying to pivot your data (in which case I'd suggest using another transformation, i.e. a Normalizer, and Informatica will start behaving as expected).
Possible solution: make a handful of connections, save and export the file as XML, go to the lines where those connections are defined, and replace that zone with as many rows as you need.
What I did specifically was take the original rows, change the names as appropriate with the help of Notepad++ and Excel, and then go back to the original file and replace all of it. Check everything three times, and import the file back into PowerCenter.
I say "possible solution" because it's messy and dirty, but even though it may lead to mistakes, I feel the number of them is vastly smaller, and you have versioning on your side, so just save before exporting. If someone with more experience could share their thoughts on this, it would be a great opportunity to learn; I'm just leaving this here in case the question goes unanswered.
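If editing by hand in Notepad++/Excel gets too painful at 700 ports, a tiny script can generate the repeated rows instead. This is only a sketch: the CONNECTOR line below is my assumption of what the exported mapping XML looks like, so copy the exact element and attribute names, and the real instance names, from your own export before using it.

# Sketch: generate the repeated connector rows for the exported mapping XML.
# The template is an assumption; paste a real CONNECTOR line from your own
# export over it, keeping the {i} placeholders for the port number.
TEMPLATE = ('<CONNECTOR FROMFIELD="PORT_{i}" FROMINSTANCE="EXP_MY_EXPRESSION" '
            'FROMINSTANCETYPE="Expression" TOFIELD="PORT_{i}" '
            'TOINSTANCE="UN_MY_UNION" TOINSTANCETYPE="Union Transformation"/>')

with open("connectors.txt", "w") as out:
    for i in range(1, 301):          # bump to 701 when the port count grows
        out.write(TEMPLATE.format(i=i) + "\n")

Then paste the contents of connectors.txt into the exported XML where the hand-made connections were, and import it back.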

Good coding patterns in RShiny applications?

What are some good resources/examples for good coding patterns to follow in RShiny applications?
I feel I am following two bad patterns in the shiny applications I am creating.
To make things react to user changes properly, I seem to end up wrapping most parts of server.R in observe().
At the beginning of each observe(), I want the expression to rerun if any one of a whole bunch of inputs changes.
Ideally, I would like to write input[change_set], where change_set is a character vector of input names; however, this gives the error Error in [.reactivevalues: Single-bracket indexing of reactivevalues object is not allowed.
(Or, if I use input[[change_set]]: Error in checkName: Must use single string to index into reactivevalues.)
What I end up doing to make things work is including multiple lines of input$var1, input$var2, ..., input$var15. This feels very wrong.
I am not making use of any functions like reactive(), reactiveValues(), isolate(), withReactiveDomain(), makeReactiveBinding(), ... . I am guessing that I probably should be, but I don't know how to use them.
The solution to this problem is probably for me to reread the small print in the documentation and to read code from example applications. Does anybody know any good-quality resources for this?

FFTW 3.3.3 basic usage with real data

I'm a newbie in FFT and I was asked to find a way to analyse/process a particular set of data collected by oil drilling rigs.
There is a lot of noise in the collected data due to rig movements (up & down with tides and waves for example).
I was asked to clean the collected data up with FFT=>filtering=>IFFT.
I use C++ and the FFTW 3.3.3 library.
An example is better than anything else, so:
I have a DB with, for example, the mudflow (liters per minute). The mudflow is collected every 5 seconds, and there is a timestamp in the DB for every measurement (e.g. 1387411235).
So my IN_data for my FFT is a series of timestamp/mudflow pairs (e.g. 1387456630/3955.94, 1387456635/3954.92, etc.).
Displayed, these data really look like a noisy sound signal, and relevant events may be masked by the noise.
Using examples found on the internet I can manage to perform an FFT, but my lack of knowledge and understanding is a big problem, as I've never worked on signal processing or Fourier transforms.
I don't really know how to proceed to start this job: which FFTW routine to use (c2c, r2c, etc.), and whether there is any pre- and/or post-processing of the data to do.
There are a lot of examples and tutorials that I've read on the internet, but I'm French (sorry for my mistakes here) and they don't always make sense to me, especially regarding the OUT_data units, the OUT_data type, the In and Out array sizes, and windowing (what is that, by the way?). To put it in a nutshell, I'm lost...
I suppose that my problem would be pretty straightforward for someone used to FFTW but for me it's very complicated right now.
So my questions:
Which FFTW routines to use in both directions (FFT and IFFT), and what kind, type, and size of array for IN_data and OUT_data?
How to interpret the resulting array (what units will FFTW return)?
For now, a short sample of what I've done is:
fftw_plan p = fftw_plan_dft_1d(size, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
fftw_execute(p);
fftw_destroy_plan(p);
with "in" and "out" as fftw_complex arrays (the complex part of every In_data element is set to 1; I don't really know why, but the tutorial said to do that).
This code is based on an example found on the internet, but my lack of knowledge/understanding is a big drag, and I was wondering if there is someone here who could give me explanations/workflow/insights/links on how to pull this off.
I'm in the trial period at my new job and I really want to implement this feature for my boss, even if it means asking around for help; I've seen a lot of FFTW-skilled posters here...
This is quite an ambitious project for someone who is completely new to DSP, but you can start by reading about the overlap-add method, which is essentially the method you need for your FFT => filter => IFFT approach to cleaning up this data. You should also check out the DSP Stack Exchange site, dsp.stackexchange.com, where the theoretical background and application of frequency-domain filtering are covered in several similar questions/answers.
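To make the workflow concrete before you wrestle with the C API, here is the whole FFT => filter => IFFT round trip sketched in Python with numpy; numpy.fft.rfft/irfft are the analogues of FFTW's r2c/c2r routines, "mudflow.txt" is a hypothetical input file, and the 0.01 Hz cutoff is only a placeholder to tune.

import numpy as np

dt = 5.0                                # one mudflow sample every 5 seconds
x = np.loadtxt("mudflow.txt")           # hypothetical file: one reading per line

X = np.fft.rfft(x)                      # real-to-complex FFT (FFTW's r2c)
freqs = np.fft.rfftfreq(len(x), d=dt)   # frequency of each bin, in Hz

cutoff = 0.01                           # keep oscillations slower than ~100 s
X[freqs > cutoff] = 0.0                 # crude low-pass: zero the noisy fast bins

x_clean = np.fft.irfft(X, n=len(x))     # complex-to-real inverse (FFTW's c2r)

Since your samples are real numbers, the r2c/c2r pair (fftw_plan_dft_r2c_1d / fftw_plan_dft_c2r_1d) is the natural choice in FFTW: the forward output is an array of size/2+1 complex bins in the same units as the input, bin k sits at frequency k/(size*5 s) Hz, and, unlike numpy, FFTW's inverse is unnormalized, so divide the result by size yourself.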

Generate words that fit in Guids (just for fun)

I have some tests that use GUIDs. The GUIDs used don't need to be enormously unique, they just need to be GUIDs. Random GUIDs are boring - so I'm trying to find fun GUID words. Right now, I don't have anything better than "00000000-feed-dada-iced-c0ffee000000". Ideally I'd generate a list of verbs, nouns, and prepositions.
Having only spent a few minutes on this problem, here's where I am:
1. Get a word list (somewhat large); I used the one from puzzlers.org.
2. Apply this regex to identify words that could be used in a GUID (o=0, i=1): ^[ABCDEFOI]{1,8}$ (a quick sketch of this step is below).
3. Squint.
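In Python, steps 1 and 2 come out to something like this (assuming a plain one-word-per-line file; adjust the filename):

import re

HEXABLE = re.compile(r"^[ABCDEFOI]{1,8}$", re.IGNORECASE)

with open("wordlist.txt") as f:          # e.g. the puzzlers.org list
    for word in f:
        word = word.strip()
        if HEXABLE.match(word):
            # substituting o=0 and i=1 makes the word valid hex
            print(word.upper().replace("O", "0").replace("I", "1"))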
Why doesn't someone have a funny GUID generator available for my immediate gratification? How would you approach this? Any suggestions on how to improve this special GUID-generation process are welcome.
The solution you started is exactly how I would approach it. And it looks like someone already did the work for you:
http://nedbatchelder.com/text/hexwords.html
This isn't a technical answer, but:
The Daily WTF had a post a while back describing a guy who wrote exactly the type of thing you're trying to create; the reason it was Daily WTF material is that the generator ended up spitting out things that sounded like curse words.
From The Daily WTF - The Automated Curse Generator
"Markov chains!" he blurted. "We can use statistical textual analysis to generate random words built up from natural phonemic combinations. They won't be real words, but they will match expected English patterns, and people will be able to pronounce and read them completely naturally."
I bet if you read that post you will get ideas about how to improve upon what you already have working.
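For flavor, here is a toy version of the Markov idea in Python: learn letter-to-letter transitions from a seed list (here a few hex-spellable words, so the output stays GUID-safe) and walk them to babble new pseudo-words. The seed list is made up; feed it your filtered word list instead.

import random
from collections import defaultdict

def train(words):
    # Character-bigram model: which letters may follow each letter.
    model = defaultdict(list)
    for w in words:
        w = "^" + w.lower() + "$"        # ^ marks word start, $ marks word end
        for a, b in zip(w, w[1:]):
            model[a].append(b)
    return model

def babble(model, max_len=8):
    out, c = [], "^"
    while len(out) < max_len:
        c = random.choice(model[c])
        if c == "$":                     # reached a learned word ending
            break
        out.append(c)
    return "".join(out)

model = train(["decade", "facade", "office", "abide", "beef"])  # toy seeds
print(babble(model))                     # e.g. "fabide"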

Is there a way to build an easy related posts app in django

It seems to be my nightmare of the last 4 weeks:
I can't come up with a solution for a "related posts" app in Django/Python that takes the user's input and comes up with a related post that closely matches the original input. I've tried using LIKE statements, but it seems they are not sensitive enough.
In particular, I need typos to also be taken into consideration.
Is there a library that could save me from all my pain and suffering?
Well, I suppose there are a few different ways to normalize the user input to produce desirable results (although I'm not sure to what extent libraries exist for them). One of the easiest ways to get related posts is to compare the tags present on the post (granted your posts have tags). If you wanted to go another route, I would take the following steps: remove stop words from the subject, run some kind of stemmer on the remainder, and finally treat the remaining words as "tags" to compare with other posts. For the sake of efficiency, it would probably be a good idea to run these steps as a batch process on all of your current posts and store off the resulting "tags." As for typos, I'm sure a multitude of spelling-corrector libraries exist (I found this one after a few seconds with Google).
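A rough sketch of that idea, to make it concrete: the Porter stemmer via NLTK is just one option among many (and assumes you've downloaded its stopword data), and difflib from the standard library gives you cheap typo tolerance.

import difflib
from nltk.corpus import stopwords            # needs nltk.download("stopwords")
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
STOP = set(stopwords.words("english"))

def to_tags(text):
    # Drop stop words, stem what's left, and treat the stems as tags.
    return {stemmer.stem(w) for w in text.lower().split() if w not in STOP}

def related(query, posts):
    # posts: list of (title, precomputed_tags) pairs from your batch job.
    q = to_tags(query)
    def overlap(tags):
        # Fuzzy tag matching so typos still count as a hit.
        return sum(bool(difflib.get_close_matches(t, list(tags), n=1, cutoff=0.8))
                   for t in q)
    return sorted(posts, key=lambda p: overlap(p[1]), reverse=True)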