Variables null after UserAction node

Page 2 of 2
Tim Cochrane
Veteran Member
Posts: 154
    you DO realize...or maybe you don't...that JS functions can be stored in either pflow.js (Lawson-delivered functions) or in pflow.user.js (customer functions). The only difference is that pflow.js is overlaid when a new Designer version is installed, whereas pflow.user.js is never touched during an upgrade. Store your JS script in one of these files and call it from any flow you're running...one repository for all your JS code.

    NOTE - there is a pflow.js file on the server that's running ProcessFlow, and another copy on your C:\ drive in your IPA Designer install. IF YOU ARE RUNNING A FLOW IN DESIGNER, the JS code you're testing must exist in the pflow.js on your C:\ drive; if testing on the server, the server's copy of pflow.js needs to have the JS code.
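    As a concrete sketch of what Tim describes, a shared helper in pflow.user.js might look like this (the function body and the lookup values are illustrative assumptions, not Lawson-delivered code):

```javascript
// Hypothetical helper stored once in pflow.user.js (illustrative only).
function findPayGrade(jobCode) {
    // Stub lookup table; replace with your real logic.
    var grades = { "ACCT1": "G5", "ACCT2": "G7" };
    return grades[jobCode] || "UNKNOWN";
}

// Any flow can then call it, e.g. from an Assign node:
var payGrade = findPayGrade("ACCT1");   // "G5"
```

    Because the file lives in one place per environment, every flow picks up a fix to the helper automatically, which is the "one repository" benefit Tim is pointing at.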
    Tim Cochrane - Principal LM/IPA Consultant
    mikeP
    Veteran Member
    Posts: 151
      The reason I'm testing the ability to get functions into Start variables is that we're hosted by Infor, and work like normal folk only at the pleasure of AMS.

      We do use a pflow_user.js for Process Flow. I first put it in a shared folder on the LSF server, then used LID to copy it to where it needs to go for PF: d:\lawprod\gen\bpm, IIRC.

      Unfortunately, IPA runs on a Landmark server, and while we will probably be successful in getting AMS to give us access to a shared folder on that server, they say LID can't be used to connect to Landmark, so I would have no way to copy the .js file to where it needs to go: D:\lmrk\system\LPS.

      Anyone know if that's true?
      Woozy
      Veteran Member
      Posts: 709
        You are correct that LID can't be used to connect to Landmark, but all you need is a different tool. A file share should work, or an ssh/telnet or sftp client. If you are using Landmark, you should have the ability to get to a Landmark command line. To be honest, I avoid LID like the plague - even for S3 tasks - except when I absolutely have to use it (e.g. pgmdef, jobdef, etc.). Otherwise I use PuTTY (ssh/telnet client) and WinSCP (sftp client) for both LSF and Landmark tasks.

        On the other hand, if you are managed by AMS you should be able to just ask them to upload your file to the correct folder on the Landmark server. They do that for us all the time. You just have to make sure you coordinate the pflow_user.js promotion with your flow promotion(s).

        I'm not sure if that helps, but I hope it does. Good Luck!
        Kelly Meade
        J. R. Simplot Company
        Boise, ID
        mikeP
        Veteran Member
        Posts: 151
          Woozy,
          I use PuTTY occasionally to connect to one of our local servers. I've read here at LawsonGuru that it can connect to an LSF server, but when I tried it, I could never make a connection.

          Using the default ports: if I try ssh, I get a "connection refused" error; using raw or telnet, PuTTY just terminates. I tried our LID port, which does connect with a raw connection, but I never get a login prompt.

          Can you tell me the protocol and port number you use?
          Woozy
          Veteran Member
          Posts: 709
            Hi MikeP - in our world, we use port 22 - but I'm not sure if that's standard or not. All this was set up by our Infrastructure team and they just told us what to use. AMS should be able to tell you, if you keep pressing until you get an answer (or ask your account rep to get you the answer). Note that we are on AIX (unix) rather than Windows so ssh users also have to be in the proper groups (lawson and the ssh_users groups).

            FYI - We use a lot of file shares to drop files onto the Infor boxes, and we can force the user/group to "lawson/lawson". I'm not sure if this is possible on the Windows side.
            Kelly Meade
            J. R. Simplot Company
            Boise, ID
            GeoffTSJY
            Basic Member
            Posts: 16
              @Tim Cochrane
              I do know that you can store functions to be part of the standard/custom library for all ProcessFlows to use. Far too often, though, the function in question is a custom function made just for the current flow I'm working on. I would not like those functions to be accessible to other flows, as they would have no use there and could only ever cause harm/confusion if used or seen as available to use.

              This is actually another area where I feel Lawson/Infor have completely missed the mark: they do not implement the idea of an on-page reference, nor do they provide a cohesive system for defining local functions. You end up having to do it in this indirect way using XML types (and other workarounds), and they don't even share that with you in the documentation. They do implement the concept of an off-page reference, but often you do not wish to make an entirely new flow to handle a small routine that you want to call several times from only one particular flow. I'm not saying that because these concepts are absent you cannot implement correct functionality, but it does make it more difficult to implement under the DRY design paradigm in any sort of elegant fashion.
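A rough sketch of the kind of flow-local workaround Geoff is describing (the variable names and the eval-based wiring are assumptions about how you might set this up, not documented IPA behavior): keep the function source in a Start-node variable and evaluate it once in a later Assign node, so the helper stays scoped to that one flow instead of living in the shared pflow.js library.

```javascript
// Function source held in a Start node variable (illustrative name),
// kept scoped to this one flow rather than the shared pflow.js library.
var localFuncSource =
    "(function formatName(last, first) { return last + ', ' + first; })";

// A later Assign node evaluates the source once and keeps the result...
var formatName = eval(localFuncSource);

// ...so the function is available for the rest of the flow.
var display = formatName("Simplot", "J.R.");   // "Simplot, J.R."
```

The parenthesized function expression is deliberate: eval returns the function as a value, so it can be stored in a flow variable without relying on eval leaking declarations into the surrounding scope.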
              mikeP
              Veteran Member
              Posts: 151
                Woozy, sounds like you must be hosted by AMS, at least in part. If your Landmark server is hosted, how do you deal with changing your named user account password on the Landmark server?

                For LSF, AMS set up a password change script for us, which we access as a portal page, but they are pushing back (as usual) now that we've asked for a similar script for Landmark - maybe because there's no portal on Landmark that could be used to provide access to such a script. Their current position is that we should send them our desired passwords in email (there are only three of us who need this access), but that's unacceptable to our auditors, so we're in a holding pattern until we find someone with enough leverage to convince them.

                Knowing how it's done at other sites might help.

                Thanks.
                Peter O
                Veteran Member
                Posts: 69

                  Hi MikeP,

                   

                  What's your version?

                  Are you bound to Active Directory? Or what authentication system do you use (just the LDAP on Lmrk/LSF? Kerberos?)

                  Do you have local Admin_ST access? You can change user passwords yourself as long as you have access to their Landmark Identity.

                  Your responses to these questions could change our answers.

                  We're hosted, and we're LDAP-bound to AD from Landmark using ssopv2 -> we use an IPA flow to upload our users, but password changes are managed on the Active Directory side. A simple perl script can update your AD passwords and be run from a flow if you wanted to do it that way. It might all depend on your organizational setup as well...

                   

                  Woozy
                  Veteran Member
                  Posts: 709
                    mikeP - we are on-premise for both LSF and Landmark, but managed by AMS for both. So we own and manage the boxes, and AMS manages the applications. Landmark is our authentication source for both systems, but in PROD we are bound to our Active Directory and in non-PROD we are not. Because we are bound to AD in PROD, all password changes happen there. We aren't bound in DEV because we need to be able to create and manage test users outside of Active Directory, so we create test actors in Landmark for this purpose.

                    On the Landmark side, there really isn't a way to script a password change. This is a glaring problem, but since we only have a couple of dozen users who access our non-PROD systems, it really isn't a significant issue for us. In our case, different users handle password changes in different ways. I like my passwords to match across environments, so I change them every 90 days (as required by our AD policy). I have to work directly with our security team to change my passwords in all Landmark environments - basically, I wander over to their desk (or use our internal collaboration tool to do it remotely), they open my Actor record in all three environments and select the "Reset Password" action, and then I type in my new password. I do this in PROD too, even though the password is not used there, because the password then gets cloned down to the other environments when we do a data refresh.

                    It's pretty ugly, but it works for us. Doing the password change process is tedious, but it isn't hard.

                    I hope this helps.

                    Kelly Meade
                    J. R. Simplot Company
                    Boise, ID
                    Tim Cochrane
                    Veteran Member
                    Posts: 154
                      @GeoffTSJY - I just wanted to make sure people knew about pflow.js, so it's good that you're aware of it.
                      I've built many custom/single-use JS functions and added them to pflow.js with no problems. Most clients I know of, and clients I've serviced, only have 1-2 PFI/IPA developers...no functional types...so the other developer understands that the function "findPayGrade" I defined for flow X isn't going to help them in flow Y.

                      I would think that pflow.js COMPLETELY supports the DRY design principle...define/maintain it one time in pflow.js, call it as many times in as many flows as you want.
                      Also - not sure what your "but for many things you do not wish to make an entirely new flow to handle a small routine that you wish to call several times by only 1 particular flow" comment really means...maybe you could elaborate.

                      I seriously doubt Infor is going to change how they store procedures, especially since it's been working for many, many years, AND all of us that build flows don't have any problems with how it's currently done.
                      Tim Cochrane - Principal LM/IPA Consultant
                      mikeP
                      Veteran Member
                      Posts: 151
                        Woozy,

                        I hadn't heard of AMS managing on-site servers before. Other than cost, what benefits does that have over full hosting? Do local staff manage your DB and LBI servers?
                        Woozy
                        Veteran Member
                        Posts: 709
                          We may be a little unique. We were very early on the Landmark/LTM bandwagon, so we pushed a little (or a lot) to get what we wanted. We are a privately held company, and we have been pretty uncomfortable with handing the keys to the kingdom over to someone else. We're much more comfortable just granting them access to our system to do maintenance.

                          We also have a very strong and experienced Infrastructure team that has a very good service level, so if something goes wrong we can holler over the wall and say, "Hey {Bob}, can you see anything funny on xyz server". They can check our infrastructure and SAN to be sure it looks OK before we open a ticket in the Infor system for AMS to look at when they have a chance, sometime in the next few hours.

                          This also makes it much easier for us to manage file shares, database connections, control job scheduling (since we use an outside scheduler for Pflows), etc. We also do lots at the server level for granting access to shares and files via unix groups and scripting certain tasks (like fixing file access issues).

                          Finally, one of our biggest challenges with hosted solutions is performance. We have many very rural locations, and many of these locations have very poor connectivity. By having these systems on-premise, we have much more ability to control how the traffic is managed and prioritized and we have the ability to monitor it for problems. For example, almost nobody installs the Rich Client locally on their machines - we serve it up over Citrix. We also control external access via our NetScalers (so only non-HR types can access the systems directly from outside the firewall without going through Citrix.)

                          There is always a strong push to "the cloud" because it's the in-thing, and we're getting hints about that here too. Unfortunately, it makes system management much more difficult. We'll fight it tooth and nail, but we may not win that battle.
                          Kelly Meade
                          J. R. Simplot Company
                          Boise, ID
                          GeoffTSJY
                          Basic Member
                          Posts: 16
                            @Tim Cochrane 

                             

                            I'm not saying you're wrong at all, or faulting you for giving a further hint, but to clarify:


                            I never made the argument that use of pflow.js is not consistent with DRY programming. Allowing one to store a function in the Start node would also be DRY; it's possible, but not in a clear way, which is why someone had to ask the question rather than reading it in the help file, where it should be. You shouldn't have to ask people for workarounds for basic programming concepts in a BPM IDE. So @mikeP was right to have to ask this; Lawson was wrong for not providing documentation, or an intuitive, self-evident solution, so that he didn't have to. But in reference to DRY: I was speaking about the process modeling concept of an on-page reference or subprocess. You can trigger another flow in ProcessFlow, but that is the modeling concept of an off-page reference. There are several differences, but the easiest to see is scoping.

                            There are nice things about visual programming, but they shouldn't come at the cost of what basic programming allows. I can easily represent a subprocess using nodes in ProcessFlow, but I cannot call this group of nodes like I would a function. Why not? So if I have the same concept that needs to be repeated at multiple points in my flow (in a way that a loop won't facilitate), I need to repeat this sequence of nodes/logic. If that concept ever changes, I need to change it in more than one location. That is not DRY. You could say: well, then make a function and just call it from multiple places. That's fine for pure logical data processing, but if the subprocess needs to interact with networked resources like a SQL node or a Landmark node or a system command or file access, then I need to import JavaScript libraries to make those calls in my function. And if I'm importing JavaScript libraries to connect to these systems, why the heck am I using a visual programming tool with nodes meant for easy interaction with these resources in the first place? Why not just do the whole thing in Python? You shouldn't lose the concept of procedural programming by using a BPM tool. Other BPM tools allow for these concepts.

                            The only way (as far as I know) to really get this to happen is to set a flag (or series of flags) to indicate the calling point of the subprocess, then go to the subprocess start, then have a branch at the end of the subprocess. The branch checks the flag; the flag just indicates where the process was called from, so that it knows where to return to. Like so:

                            [image: flow diagram - main process on the top row, subprocess on the bottom row, with a flag-checking branch at the end of the subprocess]

                            Think of the bottom row as the subprocess and the top row as the main process. You could do this with multiple subprocesses as well, but you'd have to stay on top of your flagging, and it could look messy. I know that in this picture you could just loop through it 3 times; I'm just showing how you can still get this to happen in the current system, not showing an example of where it would be useful. If you understand why functions are useful, you should understand why this would also be useful and DRY.
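Reduced to plain code, the flag-and-branch workaround Geoff describes looks something like this (node and variable names are illustrative; the flow is modeled as a tiny state machine, with the returnPoint flag telling the end-of-subprocess branch where to go back):

```javascript
// The flow as a state machine: each "case" is a node, and returnPoint
// is the flag the subprocess's final Branch node checks.
function runFlow() {
    var log = [];
    var returnPoint = null;
    var node = "stepA";
    while (node !== "end") {
        switch (node) {
            case "stepA":
                log.push("A");
                returnPoint = "stepB";   // set flag, jump into subprocess
                node = "subStart";
                break;
            case "stepB":
                log.push("B");
                returnPoint = "end";     // second call site, different flag
                node = "subStart";
                break;
            case "subStart":             // the shared "bottom row" of nodes
                log.push("sub");
                node = "subBranch";
                break;
            case "subBranch":            // Branch node: check the flag
                node = returnPoint;      // ...and return to the caller
                break;
        }
    }
    return log;
}
// runFlow() → ["A", "sub", "B", "sub"]
```

The subprocess nodes appear once, but every call site has to manage its own flag, which is exactly the bookkeeping a real call mechanism would eliminate.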

                             

                            I should be able to write it like this instead:
                            [image: flow diagram with a CallSubProc node in place of the flag logic]
                            Where CallSubProc is a special node type that calls the subprocess (the bottom row in the first pic) and returns to where it was called from when it's done. No manual flagging or branching. No repeating my code all over the place. This is nothing revolutionary - it's the same thing as a function. Basically, I should not only be able to write functions in JavaScript, I should be able to make "functions" out of the nodes themselves, without screwing with scoping or persistence issues or having to construct a web of branches.
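In plain code, a CallSubProc-style node (Geoff's proposed node, not an existing IPA feature) would behave like an ordinary function call, with the return point handled implicitly:

```javascript
// The shared "bottom row" of nodes, expressed as a function.
function subprocess(log) {
    log.push("sub");
}

// The main process: no flags and no return branch - control comes back
// to each call site on its own.
function runFlow() {
    var log = [];
    log.push("A");
    subprocess(log);
    log.push("B");
    subprocess(log);
    return log;   // ["A", "sub", "B", "sub"]
}
```

Same execution order as the flag version, but the call/return bookkeeping disappears into the language, which is the gap Geoff is pointing at.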
                            mikeP
                            Veteran Member
                            Posts: 151
                              Thanks Woozy, I appreciate the details.
                              Tim Cochrane
                              Veteran Member
                              Posts: 154
                                I think the basic issue here is you think IPA "should" let you do X because you can do that in other development tools, which in theory sounds good...However, the reality is the IPA tool only has certain functionality that, over time, some/many of us have learned to work with or have created workaround solutions for.

                                I would suggest that you submit an enhancement request with Infor...otherwise, WYSIWYG.
                                Good Luck.
                                Tim Cochrane - Principal LM/IPA Consultant
                                GeoffTSJY
                                Basic Member
                                Posts: 16
                                  @Tim Cochrane


                                  Respectfully, it's not that I believe they should do this just because other tools allow it; I think they should because it's consistent with basic computer science principles. It shows a lack of understanding/care on their part to implement these concepts so incompletely. Just like the fact that they don't even use fixed-width fonts shows that they don't get it. I was pointing out these issues to coworkers well before I even looked at other BPM tools, because of the inconsistency with the theory behind it. I didn't mean to make this a discussion of how terrible the product is. I mentioned it because the only reason this conversation was needed is that they didn't do things right. A conversation on what the product is missing and does wrong would take up entire forums.


                                  But I do know that you can get things done effectively in processflow thanks to the participation of people in the community coming up with solutions. And I have modified my designs to cope. But far easier than submitting requests for changes is to supplement this with open source, free tools that do it right. Shoot, these free tools even have Object Relational Mapping; Business Rules Management Engines; custom Class data types; Object persistence; responsive, frontend, WYSIWYG editors; on-screen notation and more...

                                  Peter O
                                  Veteran Member
                                  Posts: 69
                                    "[...]is to supplement this with open source, free tools that do it right[...]"

                                    *one hundred developers just began hissing and melting... "Open Source... hissssssssssss"*

                                    The problem with that is that it's more expensive to do it right ( http://www.jwz.org/doc/worse-is-better.html ), and then you can't use your customers as the QA team for even more savings! But hey, as Tim seemed to be expressing, this forum is definitely a solutions forum, not a gripes one.
                                    When the wiki gets started up, perhaps we can start a gripes and pain points section for developers to review! I like the idea of a gripes/pain-points area, because when concerns go unexpressed, they go unsolved.

                                    kflores01
                                    Veteran Member
                                    Posts: 43

                                      Hope my own trials with IPA help. We found the conversion tool did not fully convert the .xml to .lpd properly. To ensure proper configuration, you had to purposely select MAIN for the configuration on those nodes which had it. Other times, it was important to use (or re-use) the BUILD command on the various queries to have a node properly recognize the syntax. In a few cases, sadly our most complicated flows, we simply rebuilt them from scratch.

                                      To make matters worse, as we are in the midst of a 10x upgrade, we discovered that the IPA Designer version must match your Rich Client. Otherwise, the mismatch causes unknown issues, at times, with flows. In addition, we had to put our 10x IPA Designer on a separate computer from our 9.x Process Flow Designer, as they conflicted with one another.

                                       
