Archive for the ‘Mainframe’ Category

Mixing it up with C and assembler with METALC..

July 27, 2013 4 comments

A few months ago I posted this sample C code to reverse a character string:

// Reverse a string

#include <iostream>
#include <string.h>

using namespace std;

char * reverse( char * p) {
    char * e;
    char * s;                // copy ptr because I am going to mess with it
    int l = strlen(p);
    s=p;                     // first char ptr
    e = p+l-1;               // last char ptr

    char * m;                // mid point of the string
    m=s+l/2;                 // set it (int arith drops remainder if odd length)

    for (;s<m;s++) {         // just need to traverse half the string
        s[0]=s[0]^e[0];      // swap curr leading and current last byte
        e[0]=e[0]^s[0];      // using xor (three xors complete the swap)
        s[0]=s[0]^e[0];
        e--;                 // back up 1 from end
    }
    return p;                // return same string back to caller
}

int main() {
   char c[21]="12345";             // string to reverse
   cout << reverse(c)<<'\n';       // reverse and print result
   cout << c;                      // orig string also reversed
   reverse(c);                     // reverse in place
   cout <<'\n'<<c;                 // show it is reversed

   return 0;
}
Pretty standard loop driven stuff and ‘adequate’ shall we say.
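For what it's worth, the standard library already ships that swap-to-the-middle loop as std::reverse, so a portable baseline looks like this (a sketch; reverse_std is my name, not anything from the original post):

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>

// Same effect as the hand-rolled XOR loop: swap the outer
// pairs of characters, working in toward the middle.
char *reverse_std(char *p) {
    std::reverse(p, p + std::strlen(p));
    return p;
}
```

Handy as a correctness check against any clever replacement.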

One of my ‘to do’ items this year is to go through the POP (Principles of Operation) manual and update my knowledge of a lot of the new instructions in there, as it’s been forever since I last did that. Anyway, one of the instructions I came across was MVCIN, or Move Inverse. This is a bit like MVC (Move Characters) except that it reverses the string as it moves it. Cool, except that we are in C, that is an assembler instruction, and it’s a lot of extra work to write a subroutine you could call from a C program to do that.

But wait, there’s more (otherwise this would be a pointless post!)…

On the z/OS mainframe, the run time environment for C (and other programming languages like Cobol and PL/I) is normally provided by Language Environment (LE). BUT! If you use the C compiler METAL option (Metal C), you can compile C code that has NO LE dependency. It is completely stand alone. Not only that, but you can embed small snippets of ASSEMBLER code directly into your C source code and it compiles just like part of the regular program.

What this means is that I can replace that ugly for loop in the code above with a SINGLE instruction that will do the same thing, and do it far more efficiently than that loop will.

Now bear in mind I have not tried this code yet so it may need some tweaking but this is what I have so far:

// Reverse a string

#include <iostream>
#include <string.h>

using namespace std;

char * reverse( char * in) {

    int l = strlen(in);
    char * work;
    work = new char[l];         // get a temp work area

    __asm (
      "    MVCIN %0(%1),0(%2)      \n"
      : "=m"(work) : "r"(l), "m"(in[l-1]) );

    memcpy(in,work,l);       // copy reversed string in work to orig input area
    delete [] work;          // release work area

    return in;               // return same string back to caller
}

int main() {
   char c[21]="12345";             // string to reverse
   cout << reverse(c)<<'\n';       // reverse and print result
   cout << c;                      // orig string also reversed
   reverse(c);                     // reverse in place
   cout <<'\n'<<c;                 // show it is reversed

   return 0;
}

I have to allocate a work area to receive the reversed string, since you cannot overlay the source with the output. So the question becomes: is the cost (in CPU time) of allocating that work area and releasing it greater than the saving from switching to a hardware instruction to do the reverse? You could, however, get around that to some extent by requiring the caller to provide the work area.

The only other downside is that this instruction is limited to 256 character strings, so for anything longer you would have to implement some sort of loop. Even so, the fact that you could reverse the string in 256 byte chunks must have some performance improvement implications.
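To make the chunking idea concrete, here is a portable sketch (reverse_chunked is a hypothetical name, and the inner loop stands in for one execution of MVCIN): copy up to 256 bytes at a time from the tail of the source into the target, which is the shape a loop wrapped around the instruction would take.

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>
#include <string>

// Reverse src into dst in chunks of at most 256 bytes.
// Each pass of the while loop corresponds to one MVCIN of length n,
// copying backwards from the current tail of src.
void reverse_chunked(const char *src, char *dst, std::size_t len) {
    std::size_t done = 0;
    while (done < len) {
        std::size_t n = std::min<std::size_t>(256, len - done);
        for (std::size_t i = 0; i < n; ++i)        // one "MVCIN" of length n
            dst[done + i] = src[len - 1 - done - i];
        done += n;
    }
}
```

Each chunk lands already in its final position, so no second pass is needed.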

In case you are wondering, the %0, %1 and %2 represent the positional parameters below the instruction. The fields after the first colon are output fields and the fields after the second colon are input parameters.

The parameter "=m"(work) after the first colon tells the compiler this is an output field: it will be modified by the code, and it is a memory reference.

The two parameters "r"(l) and "m"(in[l-1]) are input parameters. "r"(l) means the length should be in a register, and the second one (the source string) is a memory reference. It has to be the address of the LAST character of the source string, not the first, hence the [l-1] after the field name.

As soon as I get a chance I’ll try compiling this and seeing if it actually works. Look for another update soon!

Categories: Coding, Mainframe Tags: , ,

Anonymous functions in REXX

June 19, 2013 Leave a comment

In this blog post Martin Packer was experimenting with modernizing REXX on the mainframe. One topic he mentioned but did not resolve was the use of anonymous functions in REXX.

An anonymous function is a block of code that is not associated with an identifier like a function name. It’s just a block of code you can pass to something else (another function) to run. It’s used a lot in things like jQuery to implement ‘callbacks’ from the invoked function back into your own code.
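The same idea in C++ terms, before getting to the REXX version (a sketch; apply_twice is a made-up example): the caller hands over an unnamed block of code — a lambda — and the callee runs it, which is exactly the jQuery-style callback pattern.

```cpp
#include <cassert>
#include <functional>

// The callee knows nothing about the code block it receives;
// it just invokes it. Here it runs the caller's lambda twice.
int apply_twice(const std::function<int(int)> &callback, int x) {
    return callback(callback(x));
}
```

For example, `apply_twice([](int n){ return n + 3; }, 1)` hands the nameless `n + 3` block to `apply_twice`, which applies it twice.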

So I got to thinking about this and did a little experimenting. The following is ‘close’, in that I can pass a block of code into a called function and have it execute there. It could implement some sort of ‘callback’ by, for example, returning data to the original caller on the stack or via an external data set, but I did not go that far. I just wanted to see how hard it would be to run some code passed in by the caller. So here’s my solution:

This first program is the caller of some function that will use the anonymous function code block I pass to it. I define the anonymous function in a comment block in the program (makes it easier to write anything longer than a couple of lines) and load it into a variable using sourceline. Notice that each line of the anonymous function has to end with a semicolon since all the lines get concatenated together into one big string.

/* rexx */

func=loadfunc('MYANONFUNC')    /* load the code block into a variable */
rc=z67('parm for z67',func,'This is a message')
return 0

loadfunc: procedure
parse arg reqdname
indata=0
code=''

do i = 1 to sourceline()
   line = sourceline(i)
   select
      when substr(line,1,11)='/*FUNCTION ' then do
         parse var line . name .
         if name=reqdname then indata=1
      end
      when substr(line,1,2)='*/' & indata=1 then leave
      otherwise do
         if indata then code = code||line
      end
   end /* select */
end /* do i=1 to sourceline() */

return code

/*FUNCTION MYANONFUNC
trace "I";
parse arg msg;
say msg;
trace "O";
return 0;
*/

The following code is the called function (called z67 in this case). It accepts a parameter for its own use (myparm), which it just displays to demo the ability, the function string itself (func), and any data (funcdata) to be passed to the anonymous function from the caller.

/* rexx */
parse arg myparm, func, funcdata
say "myparm is ("myparm")"
rc=x(funcdata)          /* run the passed-in code block */
return rc

x: procedure expose func
interpret func
return 0

The code above parses out the three arguments passed in, displays the first one and then calls an internal function (x), passing it the function data for the anonymous function (funcdata). That allows the passed code to parse any arguments passed to it by the original caller. The ‘x’ routine gets to the function code itself by exposing the ‘func’ variable and just executes it using interpret.

Notice that the anonymous function code can also accept arguments from the original caller and do things like turn on trace. It would be quite easy to modify this code so that a variable number of parameters could be passed to the invoked passed function.

Categories: Coding, Mainframe, REXX Tags:

C for z/OS assembler programmers…

June 5, 2013 Leave a comment

A few C Data structures mapped to assembler equivalents:

C                        asm

int i;                   i        ds    f       uninitialized storage area
int j=0;                 j        dc    f'0'    initialized to zero
int *name;               name     ds    a       uninitialized 'address of' storage area
int *name1=0;            name1    dc    a(0)    'address of' initialized to zero
int *name2=&i;           name2    dc    a(i)    'address of' i

struct astruct {         astruct  dsect
int a;                   a        ds    f
int b;                   b        ds    f
char c[10];              c        ds    cl10
}                        @astruct equ   *-astruct      length=18

sizeof(astruct)                   dc    a(@astruct)    length of struct/dsect

union aunion {           aunion   dsect
int k;                   k        ds    f
                                  org   k
double l;                l        ds    d       l overlays k
                                  org   k
char c[10];              c        ds    cl10    c overlays k and l
                                  org   ,
}                        @aunion  equ   *-aunion       length=10

sizeof(aunion)                    dc    a(@aunion)     length of the union
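One caveat worth checking in code: a DSECT is laid out byte by byte, so the struct above maps to 18 bytes, but a C compiler is free to insert alignment padding, so sizeof may not equal the EQU *-astruct length. Forcing byte packing makes the two layouts agree (#pragma pack is an extension rather than standard C, though xlC, gcc and clang all honor it — treat this as a sketch):

```cpp
#include <cstddef>

#pragma pack(push, 1)
struct astruct_packed {    // byte-for-byte match for the DSECT
    int  a;                /* a ds f    */
    int  b;                /* b ds f    */
    char c[10];            /* c ds cl10 */
};
#pragma pack(pop)

struct astruct_natural {   // default alignment: sizeof may exceed 18
    int  a;
    int  b;
    char c[10];
};
```

If the C side and the assembler side disagree on the layout, every field after the first pad byte is garbage, so this is worth an assert in real code.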


Categories: Coding, Mainframe

Mixing it up with C and Assembler on z/OS…

June 5, 2013 1 comment

Calling an assembler program from a C program.

Parameter passing conventions

On z/OS (or MVS if you are old enough), programs written in ‘traditional’ mainframe languages such as assembler, Cobol and PL/I typically pass parameters to each other using the following convention:


Basically, register 1 points to a list of addresses and each address in the list points to the actual data for that parameter. The end of the address list is usually (but does not have to be) indicated by the high-order bit being on in the last entry. If the end of the list is not indicated, then only a fixed number of parameters can be passed, since the called program has no way of knowing how long a variable-length list is.

Since the last entry is indicated, it is very easy to pass a variable number of parameters to a called program.
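A sketch of how a called program walks such a list (the "addresses" below are dummy values; all that matters for finding the end is the high-order bit of the last fullword, and count_parms is a made-up name):

```cpp
#include <cassert>
#include <cstdint>

// Scan an OS-linkage style parameter list: an array of 32-bit
// entries whose last element has the high-order bit turned on.
int count_parms(const std::uint32_t *parmlist) {
    int n = 0;
    for (;;) {
        std::uint32_t entry = parmlist[n++];
        if (entry & 0x80000000u)   // high-order bit on marks the last entry
            break;
    }
    return n;
}
```

This is exactly why a missing end-of-list flag is dangerous: the scan runs off the end of the real list into whatever storage follows.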

There is of course NO type checking of the data between the calling and called program. If you pass the wrong type or length of data in a parameter or if you do not pass enough parameters and the called program does not check the end of the list then it is highly likely that things are going to go wrong!

C on the other hand has all sorts of type checking for the parameters being passed between the calling and called program. Typically (and I am sure you already know this) in C, the type and form of the parameters to be passed to another called program are defined in a header (H) file. Something like this:

int mysubrtn(int pntr2Parm1,int Parm2);

Which says the module called ‘mysubrtn’ accepts two parameters, both integers (fullwords on z/OS) and returns an integer.

If you try to call ‘mysubrtn’ with any other types of parameters or with a different number of parameters the code will not compile.

You can tell C that the called program accepts a variable number of parameters by including three periods after the fixed number of parameters (one in this case):

int mysubrtn(int pntr2Parm1,...);

You do, however, then lose the type checking for the variable parameters.
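From the callee's side, that loss of checking looks like this (a sketch; sum_parms is a made-up example using the standard stdarg mechanism): nothing past the fixed arguments is verified, so the callee must learn the count and types by convention — here the first, checked, parameter carries the count of the rest.

```cpp
#include <cassert>
#include <cstdarg>

// Variadic callee: the "..." arguments are taken on trust.
int sum_parms(int count, ...) {
    va_list ap;
    va_start(ap, count);
    int total = 0;
    for (int i = 0; i < count; ++i)
        total += va_arg(ap, int);   // trusted, not type-checked
    va_end(ap);
    return total;
}
```

Pass a double where the callee pulls an int and you get garbage with no compile-time complaint — the same exposure the OS linkage convention has always had.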

The main issue when it comes to calling an assembler program is that C does its own thing in the way it passes parameters between programs. Basically the problem boils down to the fact that, by default, C and assembler do it differently. So the first step in calling an assembler program from a C program is telling the C compiler to generate code that is compatible with the operating system's way of doing things!

The #pragma linkage statement

The #pragma linkage statement tells the C compiler that when the named program is called, it should use the standard OS convention to pass the parameters to the called program. The statement looks like this:

#pragma linkage(myasmrtn,OS)

Typically the named program is the main entry point (usually the CSECT for an assembler program) for the target program, but it does not have to be. In fact there is nothing to stop a program, especially an assembler program, from having multiple entry points declared, with each taking different parameters and/or performing different functions. So long as you code a #pragma linkage entry for each entry point, it will work fine.

On its own, this is enough to be able to call an assembler program from your C code. In the absence of anything else, the default is that the C code expects the assembler code to return an int (fullword), typically a return code. And of course, there is no parameter checking, since the C code currently knows nothing about the parameters the assembler code requires.

So you could do something like this:

int rc;
rc = myasmrtn();

Or this:

int rc;
int parm1;
rc = myasmrtn(parm1);

However you could also call it like this:

int rc;
int parm1;
rc = myasmrtn(&parm1);

The difference between the last two examples is that the first one creates a data structure like this:


While the second one creates a data structure like this:

If your called program expects the first form and you use the second form, you are going to be in trouble, since you will end up using the address of the parameter as the parameter itself.

Adding parameter checking for the assembler program to your C code

You do this in exactly the same way that you would do it for a C program, that is by defining a model call statement, either inline or in a header (H) file. For example:

int myasmrtn(int parm1);

Since you have declared the number and type of parameters that the assembler code expects, the following will no longer compile:

int rc;
int parm1;
rc = myasmrtn();             /* wrong number of parameters */

Or this:

int rc;
int parm1;
rc = myasmrtn(&parm1);       /* wrong type: address instead of int */

Or even:

int rc;
char parm1[]="some string";
rc = myasmrtn(parm1);        /* wrong type: char pointer instead of int */

The cool part though is that since we are now defining what parameters the program will accept, we can also define what it will return. For example, instead of returning a simple return code in an int, it could return the address of a work area:

int *myasmrtn(int parm1);

Function overloading

Let's assume that you have an assembler program that can accept a variable number of parameters, with the first parameter indicating the call type, which in turn dictates what the following parameters will be. Now we want to call that program from our C code, and just to be safe we'd like to use some form of parameter checking. To make things easier, we have the source of the assembler program, so we can modify it to help us achieve this goal.

In assembler the executable code typically starts with the CSECT (or RSECT) statement and this also defines the main entry point for a program. You can however define other labels within the code as alternative entry points like this:

MYASMRTN CSECT
         ENTRY ALTENTRY
         Some code
ALTENTRY DS    0H
         Some other code

Or even this:

MYASMRTN CSECT
         ENTRY ALTENTRY
ALTENTRY DS    0H
         Some common code

Notice that in the second example, we have two entry points that both invoke the SAME code. As far as the executable assembler code is concerned this is the same entry point, the actual code would not care. But from a C point of view we can now do something like this:

#pragma linkage(myasmrtn,OS)
#pragma linkage(altentry,OS)
int myasmrtn(int calltype, int parm1, int parm2);
int* altentry(int calltype, char *parm1, int parm2);

When calling ‘myasmrtn’ the C compiler will check that I am passing three integers (fullwords). When calling ‘altentry’, the C compiler will check that I am passing an int, a char string (actually the address of the string) and another int. In addition, when I call ‘myasmrtn’ I expect an integer back (such as a return code). When I call ‘altentry’ I expect the address of an int (although it could actually be anything, even a GETMAINed area) back.

From the assembler perspective, it does not care about all that but from the C perspective, we now have some basic parameter checking at compile time.

Categories: Coding, Mainframe

Return Codes…

Ever wonder why return codes are usually multiples of 4? Well if so (and even if not), here’s the answer.

On IBM mainframes, return code handling after calling a subroutine in assembler is typically done using a branch table that is indexed into using the value in a register (usually 15), so you'd get something like this:

          CALL  SUBRTN            CALL A SUB ROUTINE
          B     NEXT(R15)         BRANCH INTO TABLE INDEXED BY R15
NEXT      B     OK                RC=0, OK
          B     RC4               RC=4, WARNING
          B     RC8               RC=8, ERROR
          B     RC12              RC=12, REAL BAD ERROR
          B     RC16              RC=16, QUIT

So why the multiple of 4?

Branch instructions to a label, such as B OK as shown above, occupy 4 bytes. The B NEXT(R15) branches to the label NEXT plus whatever is in R15, so when R15 contains zero, it branches to the first branch instruction, which then branches to the label OK. When R15 contains four, it branches to the second branch instruction (label NEXT + 4), which then branches to the label RC4, and so on, with each increasing return code representing a more severe error.

To keep the branch tables short, return codes were typically limited to zero through sixteen, thus giving 4 ‘error’ return codes in addition to zero (OK).

Obviously, if the subroutine returned any other return code, such as 2 for example, you would end up branching to some address other than a valid branch instruction, with who knows what result!
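The branch table can be expressed in C terms as well (a sketch; rc_meaning is a made-up name): each assembler slot is a 4-byte branch instruction, so the return code doubles as a byte offset into the table, and dividing by 4 turns it back into an array index.

```cpp
#include <cassert>
#include <cstring>

// The assembler branch table as a lookup: rc/4 picks the slot,
// just as B NEXT(R15) picks the 4-byte branch at offset R15.
const char *rc_meaning(int rc) {
    static const char *table[] = {
        "OK", "WARNING", "ERROR", "REAL BAD ERROR", "QUIT"};
    return table[rc / 4];   // only valid for rc = 0, 4, 8, 12, 16
}
```

And it has the same failure mode: pass an rc outside 0–16 (or a non-multiple of 4) and you index into something that isn't a valid slot.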

So, now you know.

Categories: Coding, Mainframe

Dynamic ISPF Panels – Getting the Input Data

April 23, 2013 Leave a comment

In the previous article I looked at the general process of building the ISPF screen dynamically, including the creation of input fields. Now it's time to get the user's input from those fields into the application. Again I am using REXX as the driving code.

In a typical static ISPF panel, fields are named, and the data in them can be set and retrieved by variable name within the application.

With a dynamic ISPF panel there are no named variables you can refer to, only the variable holding the data string that is the panel definition. When the user enters some data into the input fields on your screen and presses Enter, ISPF rebuilds the entire data string and updates the dynamic screen variable, complete with the text fields you output, attributes and input data from the user. You have to manually extract the input data from that string and place it into your own variables.

The process of extracting the input data from the screen variable is a lot simpler if you only ever use one attribute character to mark the start of an input field and one attribute character to mark the end of an input field.

In the previous article I had the following hexadecimal attribute definitions in my skeleton panel:

01 TYPE(DATAOUT) INTENS(LOW)                      
02 TYPE(DATAOUT) INTENS(HIGH)                     

And I built each input line using the following code:

dynvar=doy||SetSize('Enter something..............',24)||din||,
       SetSize('here',8)||dol

The effect of the above code is that the start of each input field is marked by a ’03’x attribute byte and the end of the input field is marked by an ’01’x attribute byte.

The following piece of code extracts the input fields from the screen variable and places them into stem variables, then sets the total number of input fields in the dot zero stem variable:

i=0
do while pos('03'x,dynvar) <> 0
   parse var dynvar . '03'x data '01'x dynvar
   i=i+1
   input.i=data                /* stem variable holds each field */
end
input.0=i                      /* total number of input fields */


Hopefully it is now obvious why I chose to use hexadecimal values for attribute characters. If I had used a character the user could enter as the end-of-field attribute byte, then if the user entered that character, the above code would only extract data up to that character, in effect cutting the input data short.

If you use multiple differing attribute bytes to delimit the start or end of the input fields then the above code becomes more complex since you have to allow for the multiple differing start and end of field markers.
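The single-delimiter scan itself is simple in any language. Here it is in C++ terms (a sketch; extract_fields is a made-up name): collect every substring bracketed by an 0x03 start attribute and an 0x01 end attribute, which is the same scan the REXX loop performs on DYNVAR.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Pull out each input field: text between a 0x03 (datain) byte
// and the 0x01 (dataout) byte that closes the field.
std::vector<std::string> extract_fields(const std::string &dynvar) {
    std::vector<std::string> fields;
    std::string::size_type pos = 0;
    while ((pos = dynvar.find('\x03', pos)) != std::string::npos) {
        std::string::size_type end = dynvar.find('\x01', ++pos);
        fields.push_back(dynvar.substr(pos, end - pos));
        pos = (end == std::string::npos) ? dynvar.size() : end;
    }
    return fields;
}
```

Because the delimiters are bytes the user can never type, nothing in the field data can be mistaken for a marker.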

This technique works fine if the input fields are always in the same order on the screen. Remember, there is nothing in the input data from the screen that really defines what each field contains unless you look at the text that occurred before it, so you have to know that the first input field always contains data for a specific item, the second input field always contains data for the next specific item and so on.

If your application can build the screen with input fields that appear in differing order for different situations then YOU have to keep track of the order of the input fields on the screen and extract the data into the appropriate variables for your application code to process.

One way is certainly to look at the prompt that occurred before the input field and use that to determine what the data is and which variable it should go into in the application and I am sure there are many other ways to achieve the same thing. As all the best books say, this is left as an exercise for the reader!

Categories: ISPF, Mainframe, REXX

Dynamic ISPF Panels

April 19, 2013 4 comments

Typically, ISPF panels are very static in nature, at least as far as the content is concerned. Input and output fields are ‘named’ and can be set and contents retrieved by the underlying application through the use of variables and variable services. In addition the panel can contain a wide variety of processing statements to validate the input, thus relieving the application of much of that work.

In all my years working with mainframes I have never worked with, or had the need to use, any other type of ISPF panel – until now!

I was looking at adding an ISPF based ‘wizard’ to some code and wanted to make it as ‘consumable’ as possible.

Well, the reality is that I could have used the typical ISPF static panel but I decided I wanted to try to mimic the more modern dynamic web page applications where the content of the page changes depending upon the options you select.

What I found was that you can define a ‘dynamic’ area in an ISPF panel and actually build the contents of the screen in the application. While I found some general guidelines on how to do this I did not find any specific examples of what you might call ‘real world’ code and so, as usual, I had to invent it all myself!

This article is the result of my experimentation. There are some things still missing, like cursor positioning for one, and handling input and output fields that change order (you'll see why this is important later on), but should you feel the need to have a go at creating dynamic ISPF panels, hopefully this will help get you started. They are pretty cool!

By the way, since this was simply an exercise my example code is all done in REXX to keep things simple and easy to change.

Panels and execs are all in the standard ISPF locations, nothing fancy there.

The Panel Definition

Even though you can build pretty much the entire panel in the application, you still need a basic panel definition in the ISPPLIB concatenation. This is mine:


The dynamic area is defined by the ‘$’ symbol in the panel. It extends the full width of the panel (80 chars) and is referred to by the name DYNVAR. The ‘$’ characters are NOT treated as attribute characters and do NOT take up space on the panel; they are in fact part of the dynamic area.

I also defined several hexadecimal attribute character strings, x’01’ through x’05’. I used hexadecimal characters so that there is no way the user can enter them into the input areas. This is important because YOU have to find them in the screen variable (DYNVAR in my case) in your code, and if you used a character the user could enter, you might mistake user input for attribute characters.

Cursor positioning is currently rudimentary and fixed. I position the cursor at column 27 in the dynamic screen variable DYNVAR, which is the location of my first input field. If the location of the first input field could move, I would have to make that positioning more dynamic by using a variable and setting the offset in the dynamic var in the application code. For now, this works though!

The Dynamic area

In my case, the dynamic area is 80 characters long. The way it is built is that the first 80 characters of whatever is in the DYNVAR variable are used to build the first line of the dynamic area. Then the next 80 characters are used for the second line, the next 80 characters for the third line and so on.

So, if I do the following in the application exec (My test panel is called Z991):

dynvar='This is a test.'

I would get this:


Notice the cursor is where I positioned it in the panel definition in spite of the fact there is nothing there yet!

In order to get something to display on the next line I have to pad that text out to 80 characters and then add the next line of text. So I’d need driving code something like this:

dynvar=fit2line('This is a test.')||'This is line 2.'

Which gives me this:


In case you are wondering what ‘fit2line’ is, it’s a little function I wrote that just pads out whatever is passed to it to a multiple of the line length (80 characters in my case). It looks like this:

Fit2Line: procedure

parse arg in
ll=80      /* must = line length */
l=length(in) // ll
if l=0 then return in           /* already an exact multiple of a line */
out=in||copies(' ',ll-l)
return out

If I wanted to add a third line, the dynvar assignment would be something like this:

dynvar=fit2line('This is a test.')||fit2line('This is line 2.')||'This is line 3.'

Input Fields

So now let's add an input field at the cursor location.

Since DYNVAR starts in column 1 (remember the ‘$’ is NOT an attribute, just a marker for the dynamic area) and the cursor is at column 27 on line 1 of the area, we need to make the data up to the cursor position 26 characters long.

However, Since the cursor is at position 27 and this is going to be an input field, we need an attribute character just before it. It would also be nice if we had an attribute character before the text as well.

If you go back to the panel definition, you will see that it has the following attributes defined:

01 TYPE(DATAOUT) INTENS(LOW)                      
02 TYPE(DATAOUT) INTENS(HIGH)                     

Each 2 digit value represents a single hexadecimal attribute byte.

The input attribute is hex ’03’ and there are a variety of output attributes. I’m going to make the text yellow so I am going to use the hex ’05’ value.

My code now looks like this:

doy='05'x                     /* dataout yellow attribute */
din='03'x                     /* datain attribute */
dynvar=doy||SetSize('Enter something..............',24)||din

I defined the attributes as variables to make them easier to recognize.

SetSize is a simple routine to set the length of a string to a specified size. It looks like this:

setsize: Procedure                        
parse arg in , l                          
if length(in) < l then return left(in,l)  
return left(in||copies(' ',l),l)

This is what the resultant panel looks like:


Slight problem! We did not place an attribute to indicate the end of the input field/start of the next field. Hence it continued all the way to the end of the dynamic area!

Remember that everything follows on sequentially so we need to set a string to the required length of the input field and then add in a closing attribute byte. The input field string can be spaces or it can contain some initial data. So now my code looks like this:

dol='01'x                     /* dataout low attribute */
dynvar=doy||SetSize('Enter something..............',24)||din||,
       SetSize('here',8)||dol

I’ve added a ‘dataout low’ attribute (see panel definition) and set the input field size to 8 characters. I’ve also initialized the input field data to the word ‘here’. Now my screen looks like this:


If I wanted to add another input field on the third line so that there is a blank line between the input lines then my code becomes something like this:

dynvar=doy||SetSize('Enter something..............',24)||din||,
       SetSize('here',8)||dol
dynvar=fit2line(dynvar)||fit2line(' ')||,
       doy||SetSize('Enter something else.........',24)||din||,
       SetSize(' ',8)||dol

Notice that after setting the first line up I had to use the fit2line function to expand it to fill the whole line. Then I used the fit2line function with a single space to cause it to add a blank line and finally I added the new line to the dynamic variable. My screen now looks like this:


Of course the data I have displayed in the input fields could have come from variables instead of hard coded literals, as could the prompts.

In the next part of this article I shall look at the steps needed to extract the input data from the screen into the application.

Categories: ISPF, Mainframe, REXX


April 1, 2013 2 comments

The PARMGEN configuration tool can be a little daunting and overwhelming when you first start to use it but the reality is that it is really pretty simple to use. This article walks you through creating an initial Run Time Environment (RTE) from scratch. This example assumes the RTE being created will run on the same LPAR that you are running PARMGEN on.

It’s probably simplest to think of an RTE as all the ITM and OMEGAMON components that you want to run on a single LPAR. To keep things simple though, I suggest you only configure one agent initially, get that running and then add others in later on, one at a time. Once you have a working RTE with everything in it, it becomes simple to clone it later on to create RTEs for other LPARs.

Start PARMGEN by entering the command EX ‘hlq.TKANCUS’

Where hlq is the hilh level qualifier of the SMP/E TKANCUs install library. That will bring up this screen:


Select option 5 “Configure z/OS products with Parameter Generator Workflow (PARMGEN)”. That will bring up this screen:


PARMGEN needs some initial information to get started.

In the GBL_USER_JCL field, enter the name of a partitioned data set that PARMGEN is to use to contain various control members and JCL decks. If the data set does not exist, PARMGEN will create it for you.

In the RTE_PLIB_HILEV field, enter the high level qualifier of the work data sets that PARMGEN will need to create for each RTE. The work data set names will be of the form RTE_PLIB_HILEV.rtename.something.

Now let's go through the steps to create an initial basic RTE with just one product (I'll use OMEGAMON XE on z/OS for this example):

Since this is the first time using PARMGEN, the RTE_NAME field will be empty, so enter a name for the RTE. Typically I use the SMF ID of the LPAR.

Select option 1.

This creates the work data sets needed by PARMGEN for this RTE. You will be presented with a series of screens:

On this screen(KCIP@PG1), enter any job card information but leave the rest empty and press enter:


On the next screen (KCIP@PG2), leave GBL_INST_HILEV empty (this is a new RTE built from scratch, not a conversion from ICAT), enter any unit/volser/etc information needed for the SMP/E install libraries (TK* data sets) and the unit type for the global work data sets, then press enter.


On the next screen (KCIP@PG3), review the prefilled fields and enter any additional ones needed for your installation, mostly related to SMS information. Specify the TEMS type (HUB or REMOTE) at the bottom and press enter.


The next screen (KCIP@PG4) will give you a list of all the products currently installed into the SMP/E environment. Press enter to continue:


On the KCIP@PG5 panel, EXCLUDE the product components that you do not want, then change the Confirm flag from N to Y and press enter:


To create an RTE with ITM and OMEGAMON XE on z/OS in it, on the list above I would exclude everything EXCEPT KDS, KM5 and KOB. KDS is ITM and provides the framework in which everything else operates, KM5 is the OMEGAMON XE on z/OS product and KOB is base code needed by OMEGAMON XE on z/OS and also provides the enhanced 3270 User Interface.

As this is a completely new RTE, skip the resultant popup by pressing enter:


Submit the KCIJPCFG JCL that is presented. This creates the work data sets needed by the rest of the PARMGEN process.

Select option 4

Submit the job JCL. This will load the work data sets with additional members.

Select option 8

Edit the $GBL$USR member

From this menu, select option 2 first (you'll only need to do this step once, even if you make changes to the RTE later on, so let's do it now):


The $GBL$USR member of the WCONFIG work data set is the user copy (i.e. yours) of the IBM supplied defaults member ($GBL$IBM) of the same data set. The IBM supplied member contains default data set names for data sets required by the RTE, and also the data sets that will be used to receive things like started task JCL procs.

The entries in the $GBL$USR member are all commented out, but you can uncomment them and change them to your own site-specific data set names as required. Typically I do not let PARMGEN override my production SYS1.PROCLIB or VTAM libraries, and so change those settings to point at my own staging libraries, from where I later manually copy the members as required to my live system libraries.
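As an illustration only (the parameter names here are from recollection of the $GBL$USR member and may differ at your PARMGEN level, and the data set names are made up), pointing the PROCLIB and VTAMLST output at staging libraries might look something like:

```
GBL_DSN_SYS1_PROCLIB        "MYHLQ.STAGING.PROCLIB"
GBL_DSN_SYS1_VTAMLST        "MYHLQ.STAGING.VTAMLST"
```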

Edit the RTE member

Now we get to the bulk of the configuration for the RTE. Select option 1 from the menu above and you will be placed in an edit session for the RTE member in the RTE's WCONFIG data set.

In spite of all the parameters in the deck, you really only need to change a few, at least initially.

You may want to review the communications protocols that the RTE will use. If any are enabled, they should also be enabled at the hub TEMS. Typically I remove the SNA option, but your site will have its own requirements.

You may also need to change the default IP ports used by the environment, but if possible stick to the defaults (1918 etc.).

If this RTE is NOT a hub, you will need to specify the IP address (in KDS_HUB_TCP_HOST) and port (in KDS_HUB_TCP_xxxx_PORT_NUM) of the hub.
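For example, a remote RTE pointing at a hub on host 10.1.1.1 using the default port might contain something like the following. The host and port values are purely illustrative, and the PIPE variant of the port parameter is an assumption (substitute whichever protocol your site uses for the xxxx):

```
KDS_HUB_TCP_HOST           "10.1.1.1"
KDS_HUB_TCP_PIPE_PORT_NUM  1918
```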

Each product is configured within its own section in the RTE member, so they are easy to find. Just go through them in turn (only configuring a couple of products to start with keeps this as simple as you can get it) and set anything that needs changing. For the most part you'll only need to change things like VTAM node names and possibly started task JCL member names.

Since this is your first RTE, it's probably going to be for a test system, so if possible turn off (do not configure) any security to keep things as simple as possible. I would also suggest NOT using system variables initially; you can always change that later on.

Once you are done, PF3 to save and exit, then PF3 back to the main PARMGEN menu.

Select option 9

When not using system variables for an RTE, I always run step 9 to validate the RTE before proceeding. Submit the job JCL that is presented. When the job completes, if it has any errors you can see the validation report in WCONFIG($VALRPT) by entering 9S on the main menu (press enter to clear the job information that is initially displayed and go to the report).

The first part lists the input data sets and numbers them.

The second part (labeled section 1) lists any errors along with the input data set number and the line number within that input deck.

Make a note of any errors, then go back to step 8 and make changes to the appropriate members (probably only $GBL$USR or the RTE member), then rerun step 9 to validate the input again.

Select Option 10

The $PARSE step takes all your input and loads the interim work data sets with everything needed for the actual generation process, which is coming up next.

Select option 11

This brings up the submit menu. While you can run the jobs individually, it's probably simplest initially to just select the KCIJPSUB composite job and run them all in one go.

Deploy the RTE

If you changed the $GBL$USR member to specify staging libraries for data sets such as SYS1.PROCLIB and SYS1.VTAMLST, then you can review the members in the staging libraries before copying them to your system's live libraries.

At this point you should be able to start the started tasks and have the environment come up on this system. Remember, this example assumes you are running PARMGEN on the SAME system that the RTE will execute on.
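For example, assuming the TEMS started task ended up named CANSDSST (a common default; the actual name is whatever you set in the RTE member), you would bring it up from the console with:

```
S CANSDSST
```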

Making Changes

Now that your RTE is up and running, to make changes do the following:

From the PARMGEN main menu, Select option 8 and then select option 1 to edit the RTE member.

Make the changes you need to the RTE parameters.

Select option 9 from the PARMGEN main menu to validate the RTE. If anything is wrong, edit the RTE again and repeat.

If no errors, select option 10 to run the $PARSE step.

Then select option 11 (submit) to build the RTE. You can safely run the composite job again, or you can run individual jobs if you know which ones are affected by your change.

Deploy any changed members in the staging libraries if using and restart the started tasks.


While the above may seem like a lot to read, it actually takes far less time to do than it does to read and once you have the RTE up and running, making changes is just a matter of changing the parameters in the RTE deck and regenerating everything.

Categories: ITM, Mainframe Tags: , ,

SMP/E Install of ITM and related products

October 11, 2012 Leave a comment

A customer was recently installing ITM and a number of related mainframe products and was asking about the SMP/E install jobs. Since each product provides its own, there was a lot of duplication in the jobs, and this customer had resorted to manually building their own composite install jobs.

This is how it ‘should’ be done….

Basically, you have to install ITM first which will include the installer (KCI) from the 510 media.

Then, in the TKCIINST PDS, you execute the KCIRJG00 exec to run the job generator:

ex ‘hilev.TKCIINST(KCIRJG00)’

This is described in the ITM 623 program directory in section 6.1.4.


Then you follow the steps on the screens and it will create a load of members in the output PDS, all prefixed KCI, that include the composite SMP/E setup jobs such as the dddef, data set allocation, receive, apply and accept for all the products you selected.

This is the initial screen you see:

Enter the name of a PDS data set to receive the composite members. The EXEC will allocate the data set if it does not exist.

Note. PF1 (or F1 if you are not old enough to remember that they used to be called ‘Program Function’ keys!) brings up a help screen that describes the process and each of the screens.

Next, select the product versions you want to install:

Fill out the following screens:

On the following screen, remove the asterisk in front of each data set name or over type the complete name to suit your environment.

Once you complete all the input screens the EXEC will create the output members in the PDS you specified on the first screen.

The KCIJGLOG member in the output data set contains descriptions of each of the jobs and the order to run them in.

Categories: ITM, Mainframe

Psssstttt…. Wanna buy a mainframe?

September 27, 2012 Leave a comment

On a whim I typed “IBM Mainframe” into e-Bay and there’s an IBM zSeries z9 BC 2096-O04 mainframe for sale, yours for five bucks shy of thirty grand!

Categories: Mainframe