SQL versus standard Lawson queries

Terry P
Veteran Member
Posts: 234

    A few of us at work today had a "philosophical discussion" on the merits of the "best practice" for writing Process Flows (IPA or PFI). One developer is very skilled in SQL, so whether he is writing a Crystal Report or a Process Flow, he uses SQL over other methods. My take (with limited SQL knowledge) is that while SQL might be more efficient, the advantage of Process Flow is that you can visually figure out what is going on, and it is easier to read and understand. Three or four nested queries are better than one single SQL command.

    What is the opinion of others here? When do you use SQL? Would you use SQL even though it could be done with other activities?

    Joan Herzfeldt
    Veteran Member
    Posts: 74
      I know SQL, so I can create a query faster in SQL. I am definitely more comfortable with SQL than with 'Add-ins'-style coding in the Lawson Query nodes. However, IPA is designed to work with the Lawson database. Our policy is to always use Lawson nodes when dealing with Lawson data; the SQL query/transaction nodes are to be used with other systems, not Lawson. The same goes for the Command node: only use it if there are no other nodes that will accomplish the same outcome. Even our consultant, who came to work on a difficult flow with me, said the same thing: "IPA is designed to work with Lawson, so you will get the most out of it by using those tools, and the command node was added as a last-resort kind of thing."
      Personally, it might take me longer to create the Lawson Query/Transaction nodes, but that's the point of learning a new system: how can I do this better, not how can I do the same thing with a different tool.
      IMHO
      Joan
      Kwane McNeal
      Veteran Member
      Posts: 479
        I'd guess it would depend on the volume of data needing to be processed.

        1) The approaches aren't mutually exclusive. I have seen some pretty amazing PFlow consultants do extremely difficult tasks elegantly by combining the methods.
        2) If there is an extremely large recordset and/or complex record structure to be processed from the database, then SQL is the way to go. In some extreme cases, it could be the ONLY way to go.
        3) If the data isn't in a database, then, obviously, PF?/IPA are the ONLY way to go.

        Overall, I think the drivers are complexity and/or dataset size.
        FireGeek21
        Veteran Member
        Posts: 84
          Process Flow is more than just receiving and writing data to Lawson. Some of our flows are around 25 or more nodes; one SQL statement is not possible, I don't care how big you write it. Also, SQL can't send email notifications and do some of the other things Process Flow can do!

          SQL can be faster at retrieving data than the query nodes, so SQL does have some upside. As far as Crystal goes, I use Crystal as much as I can for reporting. I go the simple route as much as possible for the benefit of other co-workers who may need to access the report. The more complicated reports I have written involve SQL Commands and SQL Expressions - multiple outer joins where speed of processing is important. We also use SQL to examine data - it is a quick dump.

          Thought: Why is he always using SQL? Is it comfort? Or because he knows no one else can do what he does, making him feel like he can't be fired? My practice is to try to keep things simple and document what I do (inside and outside the flow/report). We are one day away from not being here, and someone else will have to replace us.
          Kwane McNeal
          Veteran Member
          Posts: 479
            To follow up on what Joan stated, the command node is essential when pushing processed Lawson data to an external system that doesn't use, publicly expose, or support an RDBMS, such as creating accounts in a directory service (such as ADS or NDS).

            In those cases, you're best using Lawson-specific nodes in PF?/IPA to process the data, then push with a command node. There's no SQL that can do that for you.
            Woozy
            Veteran Member
            Posts: 709
              Much depends on how many business classes/tables I'm needing information from, and how much filtering or sorting I need to do.

              If I want to use a delivered action (Find, Update, etc.) and I have a good key set to use, then the delivered nodes work OK. This is also true if I need to do AsOfDated queries.

              If I'm doing a batch process of some sort that requires pulling related data from several tables (Employee, EmployeeAddress, EmployeeContact, WorkAssignment, HROrganizationUnit, Position, Job), then I will do it as a SQL query because it is much easier to debug (cut the call from the log and run it in my SQL query tool) and the performance is much better. However, if I'm doing an action request flow and want to see exactly what is returned from each node, then I'll use Landmark or S3 Queries.

              If I need to do any sort of non-key filtering, I usually avoid the Landmark node just because the filtering functionality is incredibly painful (spelled S-U-C-K-S) and tends to break as soon as you open it in the builder. Also, if I want specific sequencing, SQL is the only way to go. S3 Queries are a little better for this, but still aren't in the same class as SQL.

              Vanilla queries based on key-field data work fine with the Infor/Lawson query nodes. For anything else, SQL is the way to go. The big exception to this is encrypted tables, like EmployeeIdentificationNumber; these are not queryable (in any useful way) via SQL.
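To make the contrast concrete, here is a rough sketch of the kind of single query that replaces several chained query nodes. This is illustrative only: the table and column names below are hypothetical stand-ins (actual Landmark/S3 schemas vary by product and version), and the non-key filter and ORDER BY are exactly the parts the delivered query nodes make painful.

```sql
-- Hypothetical sketch: table/column names are invented stand-ins for
-- whatever your Landmark/S3 schema actually uses - verify before adapting.
SELECT    e.EMPLOYEE, e.LASTNAME, a.ADDRESSLINE1, w.POSITION
FROM      EMPLOYEE e
LEFT JOIN EMPLOYEEADDRESS a
       ON a.EMPLOYEE = e.EMPLOYEE        -- related table, one round trip
LEFT JOIN WORKASSIGNMENT w
       ON w.EMPLOYEE = e.EMPLOYEE
WHERE     e.EMPLOYMENTSTATUS <> 'T'      -- non-key filtering
ORDER BY  e.LASTNAME, e.EMPLOYEE;        -- explicit sequencing
```

In a flow, something like this would sit in a single SQL Query node instead of three chained query nodes plus post-sorting; the trade-off, as noted above, is that encrypted columns and delivered actions still require the Infor nodes.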

              I hope this is helpful.

              Kelly
              Kelly Meade
              J. R. Simplot Company
              Boise, ID
              Woozy
              Veteran Member
              Posts: 709
                By the way, regarding FireGeek's comment, I was referring to using SQL nodes within IPA/PFI, which I think is what Terry P is referencing. However, we have moved almost all of our "batch" processes off of IPA/PFI and into Microsoft SSIS packages, for performance and other reasons. We are a very heavy MSSQL shop, though, and we have the infrastructure and DBA expertise required for this. Basically, we query the raw data from our Infor systems into temporary SQL tables and then use SQL stored procedures to munge the data together and format it.
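For readers unfamiliar with that staging pattern, a minimal T-SQL sketch might look like the following. Every object name here is invented for illustration; in practice the SSIS packages land the extract and schedule the procedure.

```sql
-- Hypothetical sketch of the stage-then-munge pattern; all names invented.
-- Step 1: land the raw extract from the Infor source into a staging table.
SELECT EMPLOYEE, LASTNAME, DEPARTMENT
INTO   Staging.dbo.EmployeeRaw          -- in practice, loaded by the SSIS package
FROM   InforSource.dbo.EMPLOYEE;
GO

-- Step 2: a stored procedure munges/formats the staged data in set-based SQL.
CREATE PROCEDURE dbo.usp_FormatEmployeeExtract
AS
BEGIN
    SET NOCOUNT ON;
    SELECT DEPARTMENT, COUNT(*) AS HeadCount
    FROM   Staging.dbo.EmployeeRaw
    GROUP BY DEPARTMENT;
END;
GO
```

The win described here comes from the second step running as ordinary set-based SQL, where a failure can be reproduced directly in a SQL client instead of dug out of a flow log.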
                Kelly Meade
                J. R. Simplot Company
                Boise, ID
                Tim Cochrane
                Veteran Member
                Posts: 154
                  I agree with Kelly - in any ProcessFlow, I'll use a mixture of SQL queries and DMEs...whichever gives me the best efficiency.
                  Tim Cochrane - Principal LM/IPA Consultant
                  TBonney
                  Veteran Member
                  Posts: 281
                    I agree with Tim & Kelly as well. When building a flow, you use whatever makes sense to you at the time. The bottom line is that each developer is going to have a different way of doing things...and a different rationale for doing it that way.

                    In response to Joan's comment, from my perspective, if the node exists (SQL), there is no harm in using it if/when you're more comfortable with it than with the Infor query/transaction nodes. You're right, there is no other argument for the command node. However, as far as the SQL node goes, regardless of whether its intended purpose was communicating with "other" application databases or not, as long as it's part of the IPA toolset, why not use it when that's what you're most comfortable with? I agree that taking the time to properly learn all the ins and outs of the application and its capabilities is ideal. Unfortunately, however, that's not the world we live in. Oftentimes, when we need to build a flow, we're pressed for time (I know, when aren't we?) and therefore we may need to just build the flow using what we're most comfortable with, even if it may not be exactly the "best" approach.
                    David Williams
                    Veteran Member
                    Posts: 1127
                      When the SQL node first came out, I thought it was going to be useful for pulling data from external systems (and I did use it that way) but, like others, thought the DME was for Lawson. I have since realized that sometimes using SQL against the Lawson tables (or Landmark tables) is better. Lawson has prebuilt indexes and relationships that make using DMEs very effective, and I will most often do my flows that way. As a consultant, however, the best answer for me will more than likely come down to what the client can support when I'm gone.
                      David Williams
                      Joan Herzfeldt
                      Veteran Member
                      Posts: 74

                        Ok, point taken ;-)

                        I should mention that IPA is a new tool for us, so I'm still in the 'ideal world'.  I'm sure once I've created a lot more flows my opinions will change, and without a doubt, Tim, efficiency is very important.  I'm the lead person for IPA, so I'm trying to come up with some basic standards to help the other developers who will create flows even though it is not their primary task.  This is difficult to do when my experience is limited.  I loved the discussion back and forth on this topic; I've learned a lot.

                        thanks -Joan

                        Joan Herzfeldt
                        Veteran Member
                        Posts: 74

                          As a side note:  I like the points, member status, and number of posts listed next to our avatars.  It's a good indicator that I'm new, and that my opinions/comments should be taken with that in mind.

                          BTW - I am the type of person who's not too worried about looking stupid, because 9 times out of 10 my silly question helped someone else.

                          Ethan
                          Advanced Member
                          Posts: 28
                            Kelly, you mentioned: "we have moved almost all of our 'batch' processes off of IPA/PFI and into Microsoft SSIS packages, because of performance and other reasons." Out of curiosity, what were the other reasons? We are having all kinds of performance issues in LPA with process flows that ran efficiently in v9, and we are not having much luck getting better performance out of our LPA servers.
                            Woozy
                            Veteran Member
                            Posts: 709
                              Hi Ethan

                              Although performance is the key driver, there are a few other reasons.
                              - We have existing SQL infrastructure and DBA expertise, plus established logging, file-transfer, and monitoring standards.
                              - We have an IT Operations group that provides 24x7 monitoring, and on-call DBAs to address issues. Although they monitor IPA as well, IPA issues tend to be (frequent) application problems and SQL issues tend to be (relatively rare) data issues.
                              - Troubleshooting is somewhat easier. In order to troubleshoot a flow, you really need excellent node error logging, and logging has to be turned on for the flow. I'm sure you've noticed that having any logging turned on for a flow drastically impacts performance, and having workunit logging (otherwise known as "useful" logging) makes it significantly worse. We aren't a huge organization (only 7,000 employees), but running a batch extract out of PFI without any logging takes a couple of hours, and with logging it takes many hours or days. Using SSIS/SQL, the same extract runs in seconds, and it's all SQL, so we can easily identify what failed and then duplicate the issue in a SQL client. It is also much easier to monitor for and mitigate specific issues within the SQL stored procedure as they are identified.

                              As a side note, if you are having performance issues on flows, the very first thing I would do is make sure your logging is off ("no logging") for those flows. In Production, the only flows we log are the ActionRequest flows, for when someone asks us why someone was or wasn't assigned as an approver.

                              As I've said before, PFI is great for transactional and single-employee reporting, but it just doesn't cut the mustard for batch tasks - at least when other options are available.
                              Kelly Meade
                              J. R. Simplot Company
                              Boise, ID
                              Kat V
                              Veteran Member
                              Posts: 1020

                                I'm the procurement coordinator - so this is out of my comfort zone - but I use both add-ins and SQL, then have IT create process flows.  Very generally speaking, from this end user's experience: I am asked for something, and I figure out what to do with it, and how, manually.  Once I've done it manually a couple of times, I can figure out the process flow for IT to build.

                                At that point, why reinvent the wheel?  If I used a SQL query to get my info, IT would use my SQL and then build actions from there.  If I used an add-in to find it, they use my add-in.

                                For instance, we have an add-in that finds Bill Onlys to receive in. We wanted to automate that process and have a process flow receive them. But in the conversation, we started talking about the limitations of our current add-in, and then about "well, if we're going to look for this, we might as well look for that too."

                                SQL is just easier at that point for us, because you can get to the tables faster than by trying to figure out Lawson's indexes and which table you need to start with to get to the others. It's also more universally "understood" by my IT than the add-in DME. But that's the only practical "advantage" I've seen for using it over the add-in.

                                Ethan
                                Advanced Member
                                Posts: 28

                                  Hi Kelly,

                                  Thank you for your reply. We have been having issues specifically with our batch jobs. For example, we have a flow that loads time records into Payroll, roughly 16k-17k rows of data. In v9 this process would typically take around 45 minutes; in v10 it takes 6 days without logging turned on. We have tried rewriting the flow in 10 (rather than converting it), with no performance change. This dramatic performance decrease has been common across all of our batch processes. Our server is running Windows Server 2012 and is fairly "beefy", with 8 CPUs and 16 gigs of RAM, and when flows are running, the resources are barely being taxed. We have worked with InforXtreme, but we haven't been able to resolve the issue. It is good to hear that other customers are having similar issues; not good to hear that you have had to look for solutions outside of LPA to resolve them. We are trying to figure out where to go from here.

                                  Ethan

                                  Woozy
                                  Veteran Member
                                  Posts: 709
                                    Hi Ethan - It's surprising to me that the same flow is performing that much differently. So you were using PFI (LSF9) and converted to IPA (LMRK10)? In my experience, PFI vs IPA performance is pretty consistent. Sounds like something is configured incorrectly, but I'm not sure what it would be.

                                    Our situation is different - we've converted almost everything to IPA over the past couple of years, except for the very few (I think one) S3 flows that are triggered from a form. Also, to be honest, when I create a flow against S3 I almost always use SQL queries to get the data - mostly because I understand the tables/relationships/indexes, but also because the S3 query builder didn't work correctly in the older IPA designers (though it does now). The few flows that we have that update S3 - mostly Personnel Actions - are generally built using web nodes rather than the S3 Transaction nodes. Maybe that's the difference.

                                    Sorry I can't help you more. It might be wise to open a new post on Lawson Guru specific to your performance issue. There are quite a few folks who have gone through the PFI to IPA conversion that may have insight.

                                    Good Luck.
                                    Kelly Meade
                                    J. R. Simplot Company
                                    Boise, ID
                                    David Williams
                                    Veteran Member
                                    Posts: 1127
                                      How many instances of IPA (LPA) do you have running in the Landmark (Java) Grid? Infor suggested to another client of mine that they set up multiple instances to help performance.
                                      David Williams
                                      Kyle Jorgensen
                                      Veteran Member
                                      Posts: 122

                                        Whoa!  6 days for 16K records!! 


                                        Does your flow do an AGS call to update PR36 (or maybe PR35) in a 'batch-like' manner?

                                        Or are you using IPA to automate running the PR530 (and it's the PR530 that is running exceptionally long)?

                                        Ethan
                                        Advanced Member
                                        Posts: 28
                                          Hi Kyle, we are taking our time system files, reformatting them for the PR530 and running the PR530.

                                          Hi David, we only have a single IPA instance running in Landmark. We will look at adding an additional instance.

                                          Kelly, I appreciate your advice. I will open another forum posting regarding our performance issues.
                                          John Henley
                                          Posts: 3353

                                            Ethan, can you give us an idea of how the flow is structured and what types of nodes you are using? For instance, are you using file access, iterators, etc.? One pattern I've noticed with the move to v10 IPA is that customers who used to have LSF/PF on the same server now have IPA on a separate server, and when they use FileAccess nodes to write from within an iterator loop, that means a network round-trip for every write...

                                            Thanks for using the LawsonGuru.com forums!
                                            John
                                            BarbR
                                            Veteran Member
                                            Posts: 306
                                              John - can I ask a question related to what you said? We are AIX 9.01 going to AIX 10.0.? and have been told that IPA will still be on the application server. Is that not true?
                                              John Henley
                                              Posts: 3353
                                                IPA can be on the same server with LSF as long as IPA is the only Landmark application (i.e. you don't also have LTM, etc.) and you don't have heavy IPA requirements. However, even in that case, it is often put on a separate server.
                                                Thanks for using the LawsonGuru.com forums!
                                                John
                                                mikeP
                                                Veteran Member
                                                Posts: 151

                                                  As it happens, I'm working with the data iterators and file access nodes in IPA/IPD right now too.

                                                  I created a quick little benchmark process with an iterator that reads a text file from our LSF server and writes it back to a different file on the LSF server one record at a time, using a File Access node in the iterator loop.  Then I made a copy of the process and changed the File Access node to write to a file on the same Landmark server where IPA runs.  This is the development Landmark server, with almost nothing running on it except the occasional test IPA process.

                                                  The file is a CSV with two values, employee number and a phone number.

                                                  Times to process a 500 record file:

                                                  Writing to LSF server, 3 mins, 50 secs.

                                                  Writing to Landmark server, 19 secs.

                                                  cymaano
                                                  Basic Member
                                                  Posts: 13

                                                    Hi Woozy,

                                                    My question is similar to SQL vs Landmark nodes.

                                                    I am trying to convert a PFI flow to the IPA version.

                                                    The current PFI flow uses a SQL Transaction node to insert workunit variables (LOGAN tables), and there are a lot of insert commands. Now I would like to convert this to the IPA version, and I am sure the recommended approach is to use the Landmark node's Create action.


                                                    Is it possible to use a SQL Transaction node to populate the PF-related tables, such as PFWORKUNITS and PfiWorkunitVariable, instead of using the Landmark node?


                                                    ex. PFI version (placeholder values in angle brackets were stripped from the original post):

                                                    INSERT INTO <dbname>.dbo.WFVARIABLE (
                                                        WORKUNIT, VARIABLE_NAME, WF_SEQ_NBR,
                                                        VARIABLE_VALUE, VARIABLE_TYPE
                                                    ) VALUES (<workunit>, 'AccountName', 10, '', '3')

                                                    Can we convert this and use a SQL Transaction node in IPA as well, instead of the Landmark node (Create action)?

                                                    INSERT INTO <dbname>.PFIWORKUNITVARIABLE (
                                                        PFIWORKUNIT, PFIWORKUNITVARIABLE, SEQNBR,
                                                        VARIABLEVALUE, VARIABLETYPE
                                                    ) VALUES (<workunit>, 'Country', 0, '', 3)

                                                    Hoping for your reply


