Overcoming Speedbumps in Power Automate

On Friday, I had a text processing problem that I chose to solve in Power Automate. Here’s a summary of the problem, the solution, and the challenges encountered along the way. The takeaway, though, is that a citizen developer without a programming background would have struggled to complete the task. This is why I see the model of use where you pair a business person who has a business problem with a Power Platform developer as a much more productive scenario than a model where citizen developers are expected to be self-sufficient. That’s why the organizational structure of a Power Platform Center of Excellence (COE) is so valuable. Don’t confuse that with the Power Platform COE toolkit – that’s a set of tools to help with Power Platform governance. The PP COE as an organizational structure is a team of Power Platform experts who use Power Platform to help business users solve their problems. Think of it as an automation solutions team.

But I am getting ahead of myself. I’ll leave defining a PP COE team and how such a team can work for another time. What I want to discuss today are the friction points encountered when building a seemingly trivial Flow.

The problem: I have a report that is a data grid displayed on a secured web page, and that data grid does not allow for sorting, filtering, or exporting. So the data is difficult to analyze. The only way I’ve found so far to get the data off of that web page is to copy and paste the rows of data into a file. Each row in the report has 10 columns. However, when I paste the data into either an Excel file or a text file, all the columns become rows. So where the original report has 8 rows of 10 columns, the paste result is one column of 80 rows – again impossible to analyze.

The solution: Convert the 80-line text file into a CSV file. That’s a “Comma Separated Values” file that has each row in the format:

"value1","value2","value3"

You put a double-quote (") around each value in case there are any commas in a value, which in my case, there were. Fortunately, there were no quotes in the values.
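As a sketch (in Python rather than a Flow expression), the standard csv module illustrates the same wrap-every-value-in-double-quotes convention; the sample values here are made up for illustration:

```python
import csv
import io

# A value containing a comma would otherwise be split into two fields,
# so every value is wrapped in double quotes.
row = ["Smith, Jane", "Accounting", "2023-01-15"]

buf = io.StringIO()
# QUOTE_ALL mirrors the article's approach: quote every value, not just
# the ones that happen to contain commas.
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
writer.writerow(row)

print(buf.getvalue().strip())  # "Smith, Jane","Accounting","2023-01-15"
```

Note that the csv module also doubles any embedded quote characters automatically, which is the escaping problem the article sidesteps because its values contained no quotes.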

Here are the 7 issues encountered:

  1. Variable naming. My solution involved iterating through each line of the input. Naming my variables so I could keep straight what was an input row versus an output row was key. Renaming a variable did not cascade to each use of that variable, like it does in Visual Studio. If I rename a variable, I have to find each use of that variable and update it.
  2. Incrementing variables. I keep a count of how many lines I have read in so far, so that I know when I get to 10 I need to start a new output row and start counting at 1 again.
    1. I started my counters at 1, not 0. That’s a design decision I recognize as a programmer, but it’s not something a citizen developer would think about. Programmers, on the other hand, recognize it as a language fundamental and a source of strong opinions about the “One True Way” to count your objects….
    2. Regardless of which way you number the beginning of your counting, programmers recognize “off by one” errors as a common source of bugs. Sure enough, when I was counting rows because I needed to skip putting the fourth input row in my output columns, I was off by one and had to fix it. It’s a matter of checking the row count before or after you increment it; that difference in sequencing affects whether your check of the counter should test for 3 or 4, for example.
      1. Does this paragraph even make sense to a non-programmer?
    3. Once I got my logic right, I was surprised to discover that it looked like my counters were not incrementing. I thought I wasn’t providing the right syntax when using the Increment Value action of the Variable object. But somewhere along the way, the Power Automate designer told me something about setting my parallelization settings to 1. Threading is NOT something a non-programmer would or should need to understand for most workflow solutions. The warning about parallelization popped up next to the value incrementing action, but the parallelization settings are actually on the trigger object at the top of the workflow.
  3. Importing rows from a text file needs to produce an array, not strings. Ok, my web search skills led me to an example of how to fix this. I’m doubtful a non-programmer would have been able to find the answer as easily, or at all.
  4. Concatenation. If I am writing an expression to concatenate two strings, can I use “+” or “&” or just put values next to each other? I ended up using the concatenate() function.
  5. Newline. I needed to put a newline in the output string at the end of each row. Was it “\r”, “\n”, or something else? Again, a web search led me to creating a string variable and pressing the Enter key while in the textbox for specifying its value. Yes, the textbox became two lines high at that point, but it is a very unintuitive way to capture an invisible character. As a programmer, I wanted a way to create a Constant rather than a Variable, but just ended up using a Variable.
  6. Quoting. Rather than figuring out a textual way to “quote” (or “escape”) a quote character ("), I ended up setting up another string variable that just had the double-quote character in it, so I could use the variable in my concatenate() expressions.
  7. Commenting, Commenting out, and debugging.
    1. Giving each block a meaningful name was very important to being able to read the flow. When a block contained nested blocks, I tried to give the outer block a description of what the whole aggregate block did, not just what the first (usually conditional) action did.
    2. I have a couple of conditions in my code where the test statement is “1 = 1” (which would always be true) or “1=2” (which would always be false). These are placeholders for other tests that I used earlier in developing the flow. For example, when I started writing the flow, I wanted it to only write out a few rows rather than have it process the whole file, so the test was “If rowcount < 10”. Once I got the process working, I wanted to process the whole file, but didn’t want to remove the If statement (a Conditional in Power Automate nomenclature), so I changed the test to “If 1=1”.
    3. There’s a programming concept which in some languages is called “debugging by printf”. printf is an output function; the idea is that if it is hard or impossible to inspect the code while it is running, you can have the code write out values using printf statements. To me, Power Automate is in the middle in terms of ease of inspecting runtime values. On one hand, the history view does a pretty good job of showing values at each step. On the other hand, you can’t step through the flow as it is executing. So I used a combination of looking at the history view and using the email action to let me see what certain values were at certain times.
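For comparison, the core transformation the flow performs – regrouping a single pasted column back into rows of 10 quoted values – can be sketched in a few lines of Python. This is a hypothetical illustration (the actual solution is a Flow, not a script), using the same 1-based counter the flow uses:

```python
def columns_to_csv(lines, cols_per_row=10):
    """Regroup a one-value-per-line paste into CSV rows.

    A 1-based counter tracks the position within the current output
    row and resets after the last column -- the off-by-one trap is
    whether you test the counter before or after incrementing it.
    """
    out_rows = []
    current = []
    col_count = 1  # 1-based, as in the article
    for line in lines:
        # Quote every value so embedded commas stay inside one field.
        current.append('"' + line + '"')
        if col_count == cols_per_row:   # row is complete
            out_rows.append(",".join(current))
            current = []
            col_count = 1               # restart counting at 1
        else:
            col_count += 1
    return "\n".join(out_rows) + "\n"   # newline ends each row

# 80 input lines -> 8 CSV rows of 10 quoted values each
sample = [f"value{i}" for i in range(1, 81)]
csv_text = columns_to_csv(sample)
```

The same structure exists in the flow: an Apply to each over the input lines, a Condition on the counter, and concatenate/Increment variable actions in place of the `append` and `+= 1` here.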

Originally, I envisioned writing a PowerShell script to read the input file and create the output file. I probably could have written the PowerShell script in less time than it took me to write the Flow. But the Power Automate Flow will be easier for someone else – who is not a programmer – to use. The Flow is triggered when a colleague drops the input file in a certain folder in SharePoint, and a few minutes later, the corresponding output file appears in another folder in the same SharePoint library. The user also gets an email when the file is ready.

In conclusion, even though I was able to make the flow in less than 8 hours, it still took a programmer mindset to do so. That’s what I’ve seen time and time again when I write Flows in Power Automate. That’s why I think the idea of citizen developers creating flows entirely on their own is misguided, and the model of pairing them with a Power Automate developer is a much more productive one.
