A few of us at work today had a "philosophical discussion" on the merits of the "best practice" for writing Process Flows (IPA or PFI). One developer is very skilled in SQL, so whether he is writing a Crystal Report or a Process Flow, he uses SQL over other methods. My take (with limited SQL knowledge) is that while SQL might be more efficient, the advantage of Process Flow is that you can visually see what is going on, and it is easier to read and understand. Three or four nested queries in the flow are better than one single SQL command.
What is the opinion of others here? When do you use SQL? Would you use SQL even though it could be done with other activities?
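Just to make the debate concrete, here is a rough sketch of the kind of thing we were comparing (the table and column names below are Lawson-style examples I'm making up for illustration, not taken from an actual flow):

-- One SQL query node doing what two or three nested query nodes would do;
-- each extra query node in the flow becomes another JOIN here
SELECT e.EMPLOYEE, e.LAST_NAME, d.NAME
FROM EMPLOYEE e
JOIN DEPTCODE d
  ON d.COMPANY = e.COMPANY
  AND d.DEPARTMENT = e.DEPARTMENT
WHERE e.COMPANY = 100

His version runs once and returns everything; mine is slower, but I can follow it node by node in the designer.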
Ok, point taken ;-)
I should mention that IPA is a new tool for us, so I'm still in the 'ideal world'. I'm sure once I've created a lot more flows my opinions will change, and without a doubt, Tim, efficiency is very important. I'm the lead person for IPA, so I'm trying to come up with some basic standards to help the other developers who will create flows even though it is not their primary task. This is difficult to do when my experience is limited. I loved the discussion back and forth on this topic; I've learned a lot.
thanks -Joan
As a side note: I like the 'Points', member status, and number of posts listed next to our avatars. It's a good indicator that I'm new, so take my opinions/comments with that in mind.
BTW - I am the type of person who's not too worried about looking stupid, because 9 times out of 10 my silly question helped someone else.
We have moved almost all of our "batch" processes off of IPA and into Microsoft SSIS packages, because of performance and other reasons.
I'm the procurement coordinator - so this is out of my comfort zone - but I use both add-ins and SQL, then have IT create the process flows. Very generally speaking, from this end-user perspective: I, the user, am asked for something, and I figure out what to do with it, and how, manually. Once I've done it manually a couple of times, I can figure out the process flow for IT to build.
At that point, why reinvent the wheel? If I used SQL to get my info, IT would use my SQL and then build actions from there. If I used an add-in to find it, they use my add-in.
For instance, we have an add-in that finds Bill Onlys to receive in. We wanted to automate that process and have a process flow receive them. But during that conversation, we started talking about the limitations of our current add-in, and about "well, if we're going to look for this, we might as well look for that too."
SQL is just easier at that point for us, because you can get to the tables faster than trying to figure out Lawson's indexes and which table you need to start with to get to the others. It's also more universally "understood" by my IT than the add-in DME. But that's the only "advantage" I've seen, practically, for using it over an add-in.
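For example (simplified, and the names are just from memory, so treat them as illustrative rather than the exact tables and fields), the sort of query I'd hand to IT for the Bill Only example might look like:

-- Find Bill Only PO lines that haven't been received yet (illustrative only)
SELECT po.COMPANY, po.PO_NUMBER, ln.LINE_NBR, ln.ITEM
FROM PURCHORDER po
JOIN POLINE ln
  ON ln.COMPANY = po.COMPANY
  AND ln.PO_NUMBER = po.PO_NUMBER
WHERE po.COMPANY = 100
  AND po.PO_CODE = 'BO'     -- our Bill Only PO code (example value)
  AND ln.REC_DATE IS NULL   -- not yet received

IT can read that at a glance, take it apart, and decide whether to keep it as a SQL query or rebuild it with the flow's own query activities.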
Hi Kelly,
Thank you for your reply. We have been having issues specifically with our batch jobs. For example, we have a flow that loads time records into Payroll, roughly 16k-17k rows of data. In v9 this process would typically take around 45 minutes. In v10 the process takes 6 days, even without logging turned on. We have tried rewriting the flow in 10 (rather than converting it), with no performance change. This dramatic performance decrease has been common across all of our batch processes. Our server is running Windows Server 2012 and is fairly "beefy" with 8 CPUs and 16 GB of RAM, and when flows are running the resources are barely being taxed. We have worked with InforXtreme, but we haven't been able to resolve the issue. It is good to hear that other customers are having similar issues, though not good to hear that you have had to look for solutions outside of LPA to resolve them. We are trying to figure out where to go from here.
Ethan
Whoa! 6 days for 16K records!!
Does your flow do an AGS call to update PR36 (or maybe PR35) in a 'batch-like' manner?
Or are you using IPA to automate running the PR530 (and it's the PR530 that is running exceptionally long)?
Ethan, can you give us an idea of how the flow is structured and what types of nodes you are using? For instance, are you using file access, iterators, etc.? One pattern I've noticed with the move to v10 IPA is that customers who used to have LSF/PF on the same server now have IPA on a separate server, and when they use FileAccess nodes to write from within an iterator loop, that means a network round-trip for every write...
As it happens, I'm working with the data iterators and file access nodes in IPA/IPD right now too.
I created a quick little benchmark process with an iterator that reads a text file from our LSF server and writes it back to a different file on the LSF server, one record at a time, using a file access node inside the iterator loop. Then I made a copy of the process and changed the file access node to write to a file on the same Landmark server where IPA runs. This is the development Landmark server, with almost nothing running on it except for the occasional test IPA process.
The file is a CSV with two values, employee number and a phone number.
Times to process a 500 record file:
Writing to LSF server, 3 mins, 50 secs.
Writing to Landmark server, 19 secs.
Hi Woozy,
My question is similar: SQL vs Landmark nodes.
I am trying to convert a PFI flow to an IPA version.
The current PFI flow uses a SQL transaction node to insert work unit variables (LOGAN tables), and there are a lot of insert commands. Now I would like to convert this to the IPA version, and I am sure the recommended approach is to use the Landmark node's create action.
Is it possible to use a SQL transaction to populate the PF-related tables such as PFWORKUNITS, PfiWorkunitVariable instead of using the Landmark node?
Ex. PFI version:
INSERT INTO .dbo.WFVARIABLE ( WORKUNIT, VARIABLE_NAME, WF_SEQ_NBR, VARIABLE_VALUE, VARIABLE_TYPE ) values ( , 'AccountName', 10, '', '3')
Can we convert this and use a SQL transaction in IPA as well, instead of using the Landmark node (create action)?
INSERT INTO .PFIWORKUNITVARIABLE ( PFIWORKUNIT, PFIWORKUNITVARIABLE, SEQNBR, VARIABLEVALUE, VARIABLETYPE) values ( , 'Country', 0, '', 3)
Hoping for your reply