Optional Instructions…!

April 8, 2019 Leave a comment

One great thing about the development of IBM mainframe hardware over the last few years has been the addition of lots of nice new, shiny instructions, especially those that allow for the use of all or part of the ‘grande’, i.e. 64-bit, registers. For example, to do a locked update on a doubleword (8 bytes), in the past you would need to use two pairs of registers, for example zero and one, and fourteen and fifteen, and use a Compare Double and Swap (CDS) instruction to do the update. Oh, and those pairs of registers had to be even-odd as well. Now you can just use two 64-bit registers and a Compare and Swap Grande (CSG) instruction to do the update, and the registers do not need to be an even-odd pair either.
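To illustrate the difference, here is a hedged sketch (not from the original post; the storage field, labels and update logic are all hypothetical):

* Old style: locked doubleword update with CDS and even-odd pairs
RETRY    LM    R0,R1,DWORD          Fetch current value into pair 0/1
         LR    R14,R0               Copy to pair 14/15
         LR    R15,R1
         AL    R15,=F'1'            Compute the updated value (low word)
         CDS   R0,R14,DWORD         Swap only if storage is unchanged
         BNE   RETRY                Changed underneath us, try again
*
* New style: the same update with CSG and any two 64-bit registers
RETRYG   LG    R3,DWORD             Fetch current value
         LGR   R5,R3                Copy it
         AGHI  R5,1                 Compute the updated value
         CSG   R3,R5,DWORD          Swap only if storage is unchanged
         BNE   RETRYG               Changed underneath us, try again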

Basically, code that used to take several instructions and registers can often be simplified to fewer instructions and fewer registers by using the newer additions to the instruction set. The benefit is less CPU time consumed and faster execution.

This is great and I’ve been doing this sort of thing for a good few years now. However, if you read the POP (Principles of Operation) you may notice next to some instructions under the ‘Program Exceptions’ heading something along the lines of:

  • Operation (if …… facility is not installed)

What this means is that the instruction is ‘optional’! That means it may not be physically implemented on the machine, and that means that if you try to execute it, you will get an operation exception (S0C1 abend).

There are fields in the PSA you can test to see if a facility is installed before attempting to use an instruction or feature of the hardware but, to be honest, in all my forty-some years of coding, I have never had to do that… until now.

Well, actually, not even now, which I will explain in a minute.

I wrote some code recently and wanted to do a divide. Because of the size of the numbers, I used the grande version of the divide instruction. Before doing the divide I needed to clear the high word of the registers, so I used the SHHHR (Subtract High) instruction.

This worked great on my test machine but failed on another machine with an S0C1 abend. Yep, that other machine did not have the high-word feature installed.

It was easy enough to replace the instruction with another (the ICMH form of Insert Characters under Mask) which is not feature based and so should be there on all hardware, but this is the first time in all my years of coding that I’ve ever had to even consider the availability of a feature.
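As a hedged sketch of the swap (the register number is hypothetical), the two ways to clear the high word of a register look like this:

* Needs the high-word facility installed:
         SHHHR R1,R1,R1             Subtract high word from itself: zero
* Base z/Architecture alternative, no optional facility required:
         ICMH  R1,B'1111',=F'0'     Insert a zero fullword into bits 0-31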

I’ve always thought of the instruction set in the machine as being like the engine in a car: every car has an engine, but I’d never really considered that not all car engines are the same. If you want more power, it typically costs more to buy a car with a more powerful engine. A more powerful car will (speed limits aside) get you from A to B more quickly, and if time is money, spending money up front on a more powerful car will save you time (and money) later on. In the computer world, you can pay more up front for a machine with more features (a better instruction set) and get faster execution, which saves CPU time and thus chargeable MSUs. Or you can pay less up front and pay the money later, when you consume the extra MSUs due to the smaller, and thus slower, instruction set available to code running on the machine.

In my case, rather than test whether the feature (instruction) was available before using it, I just replaced it with an instruction that I know will always be there. The reason for this is that this is very low-use code and it was two instances of one instruction. It would take longer to execute the code to test for the feature and branch to alternate code than it would to just always execute the alternative instruction. That test and branch also adds slightly more complexity to the code, which adds to the possibility of errors and makes it just that bit harder to maintain.
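For completeness, such a test boils down to a TM against the facility-list bytes mapped in the PSA. This is a hedged sketch only; the field name, offset and bit number (45, the high-word facility) are from my reading of the PoP, so verify them before relying on this:

         TM    FLCFACL+5,X'04'      High-word facility (bit 45) there?
         BNO   USEALT               No, branch to the fallback code path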

Had the block of code been high use and/or very complex, it may have been worthwhile to test for the feature and use the faster instructions/simpler code when available, but that was not the case here. Simple is good in my book!


Categories: Uncategorized

A Matter of Style – Labels…

November 9, 2017 Leave a comment

As much as possible I try to make it obvious from a variable label, what type of variable it is and wherever possible, what other variable or control block it is related to.


I start variable names for constants with a # sign, thus:
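For example (the names here are hypothetical):

#EIGHT   DC    F'8'                 Constant fullword eight
#BLANKS  DC    CL8' '               Constant blanks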


If a variable starting with a # sign is on the receiving end of an instruction, it is wrong because I should not be modifying constants!


I start equated value names with an @ sign, thus:

@ONE   EQU   1
@FLAG1 EQU   X'80'

For equated symbols that define a length, I add the suffix ‘_LEN’ to the symbol name, thus:

FIELD1      DS    CL8
@FIELD1_LEN EQU   *-FIELD1          Length of FIELD1

That way, it is obvious in the code that I am referring to an equated length value, for example:

          LA    R1,@FIELD1_LEN        Get field length in R1

It is obvious that I intend to load the field length and not the field address into the register, and having the _LEN suffix ensures that I get the length and do not accidentally code the field address instead.

Equate/Field relationship

I prefer to ‘tie’ equated values for a field to the field they relate to by using a naming convention that uses the field name as part of the equate symbol, thus:

FIELD1         DS  C
@FIELD1_YES    EQU C'Y'
@FIELD1_NO     EQU C'N'

Then in code I would write something like this:

          CLI   FIELD1,@FIELD1_YES
          BE   DO_YES
          CLI   FIELD1,@FIELD1_NO
          BE   DO_NO
          B     NOT_SET


Similar code would apply to setting the field value; only use equated symbols that are related to the target field. This convention ensures that you are only setting and testing for values actually defined for the field.
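For example, the setting side of the convention might look like:

          MVI   FIELD1,@FIELD1_YES    Set FIELD1 to its 'yes' value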

About the only exception to this rule that I use is the use of the following common equated symbols:

@YES    EQU   C'Y'
@NO     EQU   C'N'
@POSTED EQU   X'40'

@YES and @NO should be obvious; @POSTED is to test an ECB posted bit flag, e.g.:

          TM    ECB1,@POSTED           Is ECB posted?
          BO    DONE


Field/DSECT relationship

When I am creating my own DSECTs I prefer to prefix the name of each field within the DSECT with the DSECT name and an underscore, thus:
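A hypothetical example (the names are invented for illustration):

MYBLOCK        DSECT
MYBLOCK_FLAGS  DS    X              Flag byte, clearly owned by MYBLOCK
MYBLOCK_NAME   DS    CL8            Name field, clearly owned by MYBLOCK
@MYBLOCK_LEN   EQU   *-MYBLOCK      Length of the MYBLOCK control block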


Using this standard makes it easy when reading source code to determine who owns a field. There is no confusion. The length equate for the control block also follows my convention of adding _LEN to the end and prefixing the equate symbol with an @ sign to avoid ambiguity in the executable code.

The exception to the rule…

I do allow one exception to this rule though, and that is for a program’s working storage. Typically I prefix working storage fields with STG_ (short for ‘storage’), although you can use anything you want to identify working storage fields, for example W_, etc. It is preferable to use the underscore as part of the name to avoid situations where a field name ‘might’ happen to start with whatever prefix you are using. For example, if you use just W as your working storage field prefix, is WORKF1 a field in your working storage or somewhere else? Using this convention, STG_WORKF1 or W_WORKF1 are obviously part of the code’s working storage. And again, the convention ensures that you are using the field you intend and not a WORKF1 field defined somewhere else that might allow the program to assemble but may then fail at execution time, either because the storage it refers to is not addressable, or because it is addressable but it’s the wrong field and messes up some other code (good luck finding that one!).


Whatever conventions you use, the aim is to make it less likely that you introduce errors when developing the code, and to make it much easier to read the code and locate field owners several months later, when someone else has to try to follow, understand and debug or modify it.





Categories: Coding, Style

A Matter of Style…

November 9, 2017 Leave a comment

There are probably as many ways to write a program as there are programmers in the world; more, if you accept that a program is a solution, or part thereof, to a problem and could be written in more than one language, each of which causes the writer to adopt a differing style.

So basically, there’s more than one way to skin a cat!

Now I’m not saying that the style of assembler programming I have developed over the years is the best, but I think it satisfies many of the requirements that I will cover in what I hope will be a short series of articles. Let’s put it this way: unless someone tells me different, I’ll tend to try to influence any code that I work on towards my preferred style, although there are reasons for NOT doing that, for example to maintain the existing style or coding standards, since a mix can make things worse.

SO! Why is style important?

Sometimes it isn’t. You may need to write a quick, one-off, one-use program to solve a problem. As long as it does the job, does style really matter? Well, it depends. If the code is simple then probably not, but what if the code is complex? Then making the code at least readable and logical makes sense, since if you get it wrong the first time through, it will be a lot easier to diagnose the problem and correct the code.

But let’s consider the more common situation where you are developing a program, or even multiple programs that form a piece of software, and those programs are for use by paying customers. In other words, you are creating commercial software. I am also including software written for in-house use (in the business sense), because such development, use and support usually involves internal billing and thus still qualifies as commercial software.

Basically, if someone other than the developer or developing department uses it, you can consider it to be commercial software in my mind.

So let’s say that you are assigned the task of developing a z/OS super widget application which, because it uses lots of system services, will be written in assembler, or at least a lot of it will be. Let’s look at how programming style will affect its life cycle.


The faster you can develop and test the code to the point where it does what the requirements specify, without failing or causing other problems, the faster the product will reach the market and thus generate revenue for the business.

Everyone probably has a standard program entry and exit macro, and possibly similar ones for subroutine entry and exit, but what about other stuff? Parsing of input, for example? Do you do it with hard-coded assembler, or use a macro or service to do parsing in a standard and consistent manner? If a service, do you call that service using a macro to make coding the call simpler and standard? Or do you let your programmers set everything up by hand (guess which is the more reliable method)?

Do you use a standard form of label name for program labels and subroutine names? Are you still stuck in the ‘all labels must be 8 characters long’ mental state, or do you use longer and more meaningful names, either CamelCase or with parts separated by underscores, e.g. WORK_FIELD_1?

Do you use naming standards to tie equated symbols to the field they relate to, or is it a free-for-all? Is it easy to tell, without looking at the variable definition, if a variable name is a label or an equated value? If a label, is it easy to tell which control block it belongs to? If a constant, is it easy to tell that it is a constant?

And most importantly in my mind, do you create comment blocks for the program as a whole, and for each subroutine, that describe the program, its inputs, outputs and return codes, and the same for all subroutines? Also, if the program is reliant upon a particular control block structure, do you document that WITHIN the code as best you can (using good old EBCDIC character graphics if need be) so that the architecture is right there in the code, not lost on some hard drive somewhere that got wiped because someone didn’t realize that what was on it was needed (the ‘not my job’ syndrome!)?


If the code is well structured, follows consistent naming standards, uses macros to standardize the coding of interfaces, and has proper comments describing what each routine does and what its inputs and outputs are, then understanding the code and diagnosing a problem becomes a lot easier and, more importantly, faster. You are also more likely to be able to correctly fix any problem without introducing new problems. All of the above results in a faster turnaround time for the resolution of any code-related problems. It also means it is much easier for someone other than the original developer to pick up the code and work on it with confidence.

This all results in improved customer confidence in the product and better tolerance for any problems that do occur because they know that they can be quickly resolved.

New Features

Code that is well structured is much easier to modify to add new features to. And just like the maintenance situation, there is less likelihood of such changes introducing new problems into either the new or the existing code. This all helps to get the product out to market that much quicker and earning revenue.

The Bottom Line

If you are a commercial programmer, that is, it’s your job, then your work earns the business that you work for the money that pays your wages. How quickly you can develop new code or fix existing code directly affects the company’s business results and therefore your earnings.





Categories: Coding, Style

ISPF Edit ‘Count’ macro

August 22, 2017 Leave a comment

Just a little macro I put together to count the number of occurrences of a string in a file when editing it.

/* rexx */ 
address isredit "MACRO (text)" 
if text = '' then do 
 say 'Nothing to search for. Enter text to search for with cmd' 
 return 0 
end 
address isredit "find all "text 
address isredit "(num,lines) = FIND_COUNTS" 
s='' 
ls='' 
on='' 
if num > 1 then s='s' 
if lines > 1 then ls='s' 
if lines > 0 then on = ' on 'lines' line'ls 
say 'Found 'num' occurrence's' of 'text||on

To run it, just type COUNT followed by the string to search for, and any other parameters such as the column number, on the command line when in edit and press Enter, e.g.:

COUNT 'some string'







Categories: Uncategorized

The humble EJECT statement…

March 17, 2017 Leave a comment

Back in the day, when dinosaurs still roamed the earth and programmers still knew how to use a card punch, there were printers. Not your desktop printer of today, but huge, noisy things with all the characters on a chain, a ‘ribbon’ as wide as the paper that fed vertically on rollers, and a row of hammers that struck the correct character as it went past on the chain, if you got it set up right that is! These things also used a paper loop for carriage control that told them where various spots were on the page, in particular the top of the page. Happy days!

In those days your program listing got printed out, so to make it more readable you’d put EJECT statements into the source code, usually before a subroutine, so that the subroutine and all the comments about what it did, its inputs and outputs etc. (you did document all that stuff, didn’t you?) were at the top of a page.

Fast (or not) forward to today and I doubt anyone actually prints program listings anymore. If you are anything like me it’s all online, either in SDSF, sent to a member of a ‘listings’ dataset or, if your source is off platform, your tool set grabs the output after the assembly has run and FTPs it back to your workstation/PC where again, you can view it online.

So, since the output is now all just one big stream, why would you bother putting EJECT statements into your source, other than from habit?

Now I will admit that I did this from habit until I got to thinking about it (hence this post!) and there is actually a very good reason to include EJECT statements in your source, especially before the start of a subroutine.

Here’s a little test program:

TEST     RMODE ANY               
TEST     CSECT                   
         BAKR  14,0              
         LR    R10,R15           
         USING TEST,R10          
         USING WSA,R2            
         XR    R15,R15           
         PR                      
         LTORG                   
SUB1     DS    0H                
         LA    R1,FIELD1         
         XR    R15,R15           
         BR    R14               
         LTORG                   
WSA      DSECT                   
FIELD1   DS    F'0'              
FIELD2   DS    F'0'              
         END                     

If you assemble this, the output looks like this:

                                     20 TEST     RMODE ANY          
000000                00000 00018    21 TEST     CSECT              
000000 B240 00E0                     22          BAKR  14,0         
000004 18AF                          23          LR    R10,R15      
                 R:A  00000          24          USING TEST,R10     
                 R:2  00000          25          USING WSA,R2       
                                     26 *                           
                                     27 *                           
000006 17FF                          28          XR    R15,R15      
000008 0101                          29          PR                 
000010                               30          LTORG              
                                     31 *                           
000010                               32 SUB1     DS    0H           
000010 4110 2000            00000    33          LA    R1,FIELD1    
000014 17FF                          34          XR    R15,R15      
000016 07FE                          35          BR    R14          
                                     36 *                           
000018                               37          LTORG              
                                     38 *                           
000000                00000 00008    39 WSA      DSECT              
000000                               40 FIELD1   DS    F'0'         
000004                               41 FIELD2   DS    F'0'         
                                     42 *                           
                                     43          END

In this code, I have let the USING for WSA in the mainline code ‘fall through’ into the subroutine (something I normally hate to do, except for working storage based on R13). The only way you can tell that the LA R1,FIELD1 instruction in the subroutine is using R2 as a base register for WSA is by looking at the assembled instruction.

However, if you add an EJECT before the SUB1 label like this:

         EJECT
SUB1     DS    0H            
         LA    R1,FIELD1

The output now looks like this:

000000                00000 00018    21 TEST     CSECT              
000000 B240 00E0                     22          BAKR  14,0         
000004 18AF                          23          LR    R10,R15      
                 R:A  00000          24          USING TEST,R10     
                 R:2  00000          25          USING WSA,R2       
                                     26 *                           
                                     27 *                           
000006 17FF                          28          XR    R15,R15      
000008 0101                          29          PR                 
000010                               30          LTORG              
                                     31 *                           
  Active Usings: WSA(X'1000'),R2  TEST(X'1000'),R10                 
  Loc  Object Code    Addr1 Addr2  Stmt   Source Statement          
000010                               33 SUB1     DS    0H           
000010 4110 2000            00000    34          LA    R1,FIELD1    
000014 17FF                          35          XR    R15,R15      
000016 07FE                          36          BR    R14          
                                     37 *                           
000018                               38          LTORG              
                                     39 *                           
000000                00000 00008    40 WSA      DSECT              
000000                               41 FIELD1   DS    F'0'         
000004                               42 FIELD2   DS    F'0'         
                                     43 *                           
                                     44          END

The difference is that you now get one or more lines (depends on how many usings are active) that show the current usings in effect at the very start of the subroutine. As a result I can see exactly which usings are active and whether I have inadvertently ‘inherited’ one from an earlier routine that should have been dropped.

As a matter of coding style, I prefer to always drop all active usings, including code base registers but excepting working storage based on R13, at the end of each routine. That way I know I am using the correct registers and usings in each routine, and by putting an EJECT statement in the source before each routine, I can easily see from the listing that I am not inadvertently inheriting a using from an earlier block of code that might allow the code to assemble but ultimately fail when it runs because I picked up the wrong base register.

Categories: Uncategorized

Drag and Drop with Regina Rexx…

June 13, 2016 Leave a comment

Since I’m a mainframe guy, I’ve spent most of my career writing REXX whenever I needed a quick script, which means that I know REXX pretty well. Lately I’ve been playing with REXX on Windows, thanks to an implementation of REXX called Regina Rexx, see here.

Of course, one of the things you can do on Windows is drag and drop so I wanted to be able to drop a file onto a Regina REXX program and have the REXX program then process the file.

Typically you access input parms in a REXX exec by using the “parse arg varname” statement but I found that this did not work when dropping a file onto a Rexx exec on my Windows machine.

After some research I found that I needed to create a shortcut to the Regina rexx.exe program, with a parameter that is the path and name of the REXX program to run. You then have to drop your files onto the SHORTCUT.


So let’s say I have the following exec called “test.rexx” on my desktop:

parse arg parms
say parms
say "Press enter to end" 
parse pull .

I then create a shortcut on my desktop that looks like this:

"C:\Program Files\rexx.org\Regina\rexx.exe" "C:\Users\ltlfrari\Desktop\test.rexx"

So if I drop a file called “test.txt” onto the shortcut, this is the output that I see:
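Assuming test.txt also lives on the desktop, the output is along these lines:

C:\Users\ltlfrari\Desktop\test.txt
Press enter to end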


Basically the exec receives the full path and file name as an input argument.




Categories: Uncategorized

Conditional Assembly Language…

May 10, 2016 Leave a comment

If you’ve ever read or written a macro you have no doubt at least seen conditional assembly language. It’s all that AIF and AGO stuff that forms a sort of ‘program’ within the macro so that it can generate code or whatever depending on whatever the input parameters are.

What’s really cool though is that it is not just limited to macros, you can use it within open code as well. So you might ask ‘why would you need to do that?’ but even if you don’t ask, here’s one interesting situation that came up recently.

I had some code that used a macro to generate a DSECT to map a control block. However, we were switching versions of the product that supplied the macro, and a field within the macro had changed names even though its content had not. The result was that my code would only assemble with one version of the macro; with the other, it would get a ‘not found’ error for the changed label. Since I did not want to have to co-ordinate my source code change with a build tool change, the problem I had was how to make my source code support both versions of the macro and the DSECT it generated.

In case you have not guessed, the answer is conditional assembly language.

Here’s an example.

The old macro/DSECT:
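Something like this (the DSECT name is hypothetical; the field name is the one used in the text below):

MYDSECT  DSECT
MYFIELD  DS    F                  The field under its old name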


The new version of the macro/DSECT
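Again hypothetical, the same DSECT with the field renamed but its content unchanged:

MYDSECT  DSECT
NEWNAME  DS    F                  The same field under its new name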




So my code originally looked something like this:
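Hypothetically, something like:

         LA    R1,MYFIELD         Address the field by its old name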


Obviously if I switch to the new macro library, my assembly will fail since the field ‘MYFIELD’ is no longer defined within the DSECT.

However, what you can do is to test to see if the variable ‘MYFIELD’ is defined and if not then conditionally change the code that gets assembled. Thus:

         AIF   (T'MYFIELD EQ 'U').NEWMAC
         LA    R1,MYFIELD         Old macro: field is MYFIELD
         AGO   .CONT
.NEWMAC  ANOP
         LA    R1,NEWNAME         New macro: field is NEWNAME
.CONT    ANOP


The AIF tests whether the ‘type’ attribute of the field MYFIELD is ‘U’, that is, undefined. If it is undefined, that means it has not been seen by the assembler (yet), so jump to the label .NEWMAC and continue to generate the code from there, which of course generates the code using the new field label of NEWNAME.

If the field MYFIELD is not ‘undefined’ then the assembler generates the code using the old field name, MYFIELD and then jumps (AGO) to the label .CONT to continue the assembly.

As a result, no matter which version of the macro library I am using, my code still assembles and works correctly.

There are other ways of achieving the same effect; for example, by using conditional assembly language to control the redefining of the old or renamed symbol to a common name, and then using that common name in the open code.

One gotcha to be aware of, though: the macro/DSECT has to be defined in the source code BEFORE the conditional assembly code. If it is defined after the conditional code then, since the assembler has not seen either field at the time it encounters the test, it will always treat the field as undefined, which would cause an assembly error when using the old macro/DSECT library because it would generate the code to use the new field name.

Categories: Coding, Mainframe