I have the following simple setup in Azure ML.
Basically, the Reader is a SQL query to a DB which returns a vector called Pdelta; this is passed to the R script for further processing, and the results are then returned to the web service. The DB query is simple (SELECT Pdelta FROM ...) and it works fine. I have also set the DB query as a web service parameter.
Everything seems to work fine, but when I publish it as a web service and test it, it somehow asks for an additional input parameter, which gets called PDELTA.
I am wondering why this is happening and what I am overlooking. I would like this web service to ask for only one parameter - the SQL query (Delta Query) - which would then deliver the Pdeltas.
Any ideas or suggestions would be greatly appreciated!
You can remove the web service input block and publish the web service without it. That way the Pdelta input will be passed in only from the Reader module.
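For what it's worth, here is a minimal sketch of how the published (classic) request-response endpoint could then be called with only that one parameter. The endpoint URL, API key, and the parameter name "Delta Query" are placeholders based on the question; take the real values from your service's API help page.

import json
import urllib.request

# Placeholders: copy the real endpoint URL and API key from the
# web service dashboard. "Delta Query" is the web service parameter
# name assumed from the question above.
url = "https://<region>.services.azureml.net/workspaces/<workspace>/services/<service>/execute?api-version=2.0"
api_key = "<your-api-key>"

body = json.dumps({
    "Inputs": {},  # no web service input block, so no PDELTA input
    "GlobalParameters": {"Delta Query": "SELECT Pdelta FROM ..."},
}).encode("utf-8")

request = urllib.request.Request(url, body, headers={
    "Content-Type": "application/json",
    "Authorization": "Bearer " + api_key,
})
print(urllib.request.urlopen(request).read().decode("utf-8"))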
Problem Context
We have a set of Excel reports which are generated from an Excel input provided by the user and then fed into SAS for further transformation. SAS pulls data from a Teradata database, and then there is a lot of manipulation of the input data and the data pulled from Teradata. Finally, a dataset is generated which can either be sent to the client as a report or be used to populate a Tableau dashboard. The database is also being migrated from Teradata to Google Cloud (BigQuery EDW), as the Teradata pulls from SAS used to take almost 6-7 hours.
Problem Statement
Now we need to automate this whole process by creating a front end for the user to upload the input files; from there the process should be triggered, and in the end the user should receive the Excel file or Tableau dashboard as an attachment in an email.
Can you suggest what technologies should be used in the front end and middle tier to make this process feasible in the least possible time, with Google Cloud Platform as the backend?
Can an R Shiny front end be a solution, given that we need to communicate with a Google Cloud backend?
I have had suggestions that Django would be a good framework to accomplish this task. What are your views on this?
I have a WCF web service that takes a start and end date as input, and returns a record set. What I'd like to do is set up an Informatica mapping that creates variables for the date from one week ago and today's date. These would be used as input for the Web Service Consumer transformation or a web service as a source (whichever will work), but I'm not sure how to go about this. I can't create an Expression transformation with no inputs, and I don't see how to set a mapped parameter as input.
The only two ways I can think about doing this would be to either build an app that creates a flat file with both dates, or to build a database object that supplies the dates as a source. I'd rather not have a separate outside source to provide these values, but I can't think of another way.
If you need those variables set before the mapping runs, use an Assignment Task in the workflow and use Pre-session variable assignment to set the values for the mapping before it runs.
There is no way to do this with Informatica v9.6.1. A source has to be created in order to feed the web service. I ended up creating a dummy record with one field, using it as input, then disregarding the input and setting up the variable output using an Expression transformation.
I am writing a business flow using WSO2 Process Server. I want to start a process when there is a new entry in a specific table, i.e. DB polling, as we do in an Oracle process.
So whenever there is a new entry in the table, I need to fetch that data and start processing it using BPEL.
The scenario explained above can be achieved using the task component of WSO2 ESB. With the task component, we can schedule a call to a proxy service at a given interval, and we can use that proxy to fetch the data from the database.
I have a cronjob that runs every hour and parses 150,000+ records. Each record is summarized individually in MySQL tables. I use two web services to retrieve the user information:
User demographics (IP, country, city, etc.)
Phone information (whether landline or cell phone, and if cell phone, the carrier)
Every time I get a record, I check whether I already have the information, and if not, I call these web services. After tracing my code, I found that each of these calls takes 2 to 4 seconds, which makes my cronjob very slow, and I can't compile statistics on time.
Is there a way to make these web service calls faster?
Thanks
Simple: get the data locally and use Melissa Data:
for IP: http://w10.melissadata.com/dqt/websmart/ip-locator.htm
for phone: http://www.melissadata.com/fonedata.html
You can also cache the results using memcache or APC, which will make things faster since the job does not have to request the data from the API or database every time.
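As an illustration, here is a minimal caching sketch in Python (the original job may well be PHP, given the APC mention, but the pattern is the same); lookup_ip_info is a hypothetical stand-in for the demographic web service call:

import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # local memcached instance

def lookup_ip_info(ip):
    # hypothetical stand-in for the slow (2-4 second) web service call
    ...

def get_ip_info(ip):
    cached = cache.get(ip)
    if cached is not None:
        return json.loads(cached)  # cache hit: no web service call
    info = lookup_ip_info(ip)      # cache miss: pay the 2-4 seconds once
    cache.set(ip, json.dumps(info), expire=86400)  # keep for 24 hours
    return info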
A couple of ideas: if the same users are returning, caching the data in another table would be very helpful; you would only look it up once and have it for returning users. Upon re-reading the question, it looks like you are already doing that.
Another option would be to spawn new threads when you need to do the look-ups. This could be a new thread for each request, or, if that is not feasible, you could have n service threads ready to do the look-ups and update the results, as in the sketch below.
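Here is a sketch of that idea in Python, using a pool of n worker threads; lookup_user is a hypothetical placeholder for the two web service calls:

from concurrent.futures import ThreadPoolExecutor

def lookup_user(record):
    # hypothetical placeholder: call the demographic and phone
    # web services for one record and return the combined result
    ...

records = []  # the 150,000+ records from the cronjob batch

# The calls are network-bound, so the threads overlap the 2-4 second
# waits instead of paying them one after another.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(lookup_user, records))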
I just started using ColdFusion Builder 2 with ColdFusion 9 and saw the RDS Query Viewer capability in it. I thumbed through Forta's WACK book and tried a simple example from it, a basic INSERT using an embedded Derby database:
INSERT INTO Directors(FirstName,LastName)
VALUES('Ben','Forta')
If you execute that query using the RDS Query Viewer you get an error:
Statement.executeQuery() cannot be used with a query that returns a row count.
Are INSERTs, DELETEs, and UPDATEs not allowed with this tool? I'm probably just spoiled by SQL Server Management Studio, which will let you do anything if you have the rights.
Thanks!
Yes, as far as I know, INSERTs, DELETEs, and UPDATEs are not allowed. Judging by the error, the RDS Query Viewer appears to run every statement through JDBC's Statement.executeQuery(), which only works for statements that return a result set.