In this post I talked about securing the Enhanced 3270 User Interface with RACF.
Since then a new level of the base code (FMID HKOB700) called Interim Feature One (IF1) has arrived in the form of PTF UA69877. But before you go off and apply that, don’t! Instead apply UA70618, which fixes some issues with the original code that may impact certain users.
In the hold doc (you do read all the hold doc, don’t you?) for UA70618 are instructions on setting up new RACF profiles that may be needed if you are using security to protect the Enhanced 3270 User Interface environment. These are the new resources being checked:
KOBUI.USER.COMMAND.<command_name>
KOBUI.ADMIN.PREFS.AUTOUPDATE
KOBUI.ADMIN.LISTUSERS
KOBUI.ADMIN.TRACE.UI.<trace_type>
KOBUI.ADMIN.TRACE.INTERNAL.<trace_type>
KOBUI.ADMIN.USEHUB.<hub_name>
KOBUI.ADMIN.MEMBER.WRITE.<dd_name>.<member_name>
KOBUI.ADMIN.ITM.<hub_name>.SERVICEINDEX
KOBUI.ADMIN.ITM.<hub_name>.<servicepoint_name>.SERVICECONSOLE
KOBUI.ADMIN.ITM.<hub_name>.<servicepoint_name>.SOAPCONSOLE
SYSTEM.<managed_system_name>.<table_name>
You could protect these with the following RACF profiles:
KOBUI.USER.**
KOBUI.ADMIN.**
SYSTEM.**
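As a sketch of what defining and permitting those generic profiles might look like (the class name #KOBCLAS and the group names E3270USR and E3270ADM are placeholders of mine; substitute the class your Enhanced 3270 UI environment is configured to use and your own groups):

```text
RDEFINE #KOBCLAS KOBUI.USER.**  UACC(NONE)
RDEFINE #KOBCLAS KOBUI.ADMIN.** UACC(NONE)
RDEFINE #KOBCLAS SYSTEM.**      UACC(NONE)

PERMIT KOBUI.USER.**  CLASS(#KOBCLAS) ID(E3270USR) ACCESS(READ)
PERMIT KOBUI.ADMIN.** CLASS(#KOBCLAS) ID(E3270ADM) ACCESS(READ)
PERMIT SYSTEM.**      CLASS(#KOBCLAS) ID(E3270USR) ACCESS(READ)

SETROPTS RACLIST(#KOBCLAS) REFRESH
```

Note that generics must be enabled for the class (SETROPTS GENERIC) before the ** profiles can be defined.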
Recently I came across a problem where a customer needed additional RACF profiles set up in order to log on to the Enhanced 3270 UI. These are:
The easiest way to add these would be with a UACC of READ, but your installation standards may require a different implementation. I believe a tech note will be forthcoming on the issue soon.
This particular user had a default profile of * in the RACF class with a UACC of NONE, so anything that was not specifically permitted was rejected. If you do not have such a profile in the RACF class used by the Enhanced 3270 UI, then the default action when no matching profile exists is to allow the request, which basically lets anyone do anything unless you specifically lock it down. That approach results in the least amount of work to secure the Enhanced 3270 UI environment.
The deployment phase (step 11 on the menu) of creating and loading an RTE with PARMGEN copies all the run time data to the run time libraries. Some of these members need to go into system libraries such as SYS1.PROCLIB and SYS1.VTAMLST or your user versions of same.
Rather than overwrite my current live run time procedures and VTAMLST members I configure PARMGEN to write them to ‘staging’ libraries. From there I can double check them against the current live members before committing them.
To configure PARMGEN to write to your own system data sets, edit the $GBL$USR member in WCONFIG by selecting option 8 on the main menu and then option 2 and change the highlighted lines shown in this screen shot:
You have to manually create these data sets yourself but that is easy enough using ISPF option 3,2.
After the initial deployment, most of the time you will only need to copy run time JCL from the PROCLIB staging libraries, and possibly VTAM list members from the VTAMLST staging libraries, to your actual live system data sets.
The enhanced coloring feature of the editor is an ISPF installation option and may not be available on your system. If it is, you can use the ISPF HILITE command to make RTE members easier to read.
This is a typical RTE display without any highlighting (think ‘sea of green’):
Entering the command HILITE in the command area of the screen will bring up this menu:
Set the language to Assembler and the Coloring option to 2, exit (PF3), and now your RTE will look like this:
The PARMGEN configuration tool can seem daunting when you first start to use it, but it is actually pretty simple to use. This article walks you through creating an initial Run Time Environment (RTE) from scratch. This example assumes the RTE being created will run on the same LPAR that you are running PARMGEN on.
It’s probably simplest to think of an RTE as all the ITM and OMEGAMON components that you want to run on a single LPAR. To keep things simple though, I suggest you only configure one agent initially, get that running and then add others in later on, one at a time. Once you have a working RTE with everything in it, it becomes simple to clone it later on to create RTEs for other LPARs.
Start PARMGEN by entering the command EX ‘hlq.TKANCUS’
Where hlq is the high level qualifier of the SMP/E TKANCUS install library. That will bring up this screen:
Select option 5 “Configure z/OS products with Parameter Generator Workflow (PARMGEN)”. That will bring up this screen:
PARMGEN needs some initial information to get started.
In the GBL_USER_JCL field, enter the name of a partitioned data set that PARMGEN is to use to contain various control members and JCL decks. If the data set does not exist, PARMGEN will create it for you.
In the RTE_PLIB_HILEV field, enter the high level qualifier of the work data sets that PARMGEN will need to create for each RTE. The work data set names will be of the form RTE_PLIB_HILEV.rtename.something.
Now let’s go through the steps to create an initial basic RTE with just one product (I’ll use OMEGAMON XE on z/OS for this example):
Since this is the first time using PARMGEN, the RTE_NAME field will be empty, so enter a name for the RTE. Typically I use the SMF ID of the LPAR.
Select option 1.
This creates the work data sets needed by PARMGEN for this RTE. You will be presented with a series of screens:
On this screen (KCIP@PG1), enter any job card information but leave the rest empty and press enter:
On the next screen (KCIP@PG2), leave GBL_INST_HILEV empty (this is a new RTE built from scratch, not a conversion from ICAT), enter any unit/volser/etc. information needed for the SMP/E install libraries (TK* data sets) and the unit type for the global work data sets, then press enter.
On the next screen (KCIP@PG3), review the prefilled fields and enter any additional ones needed for your installation, mostly related to SMS information. Specify the TEMS type (HUB or REMOTE) at the bottom and press enter.
The next screen (KCIP@PG4) will give you a list of all the products currently installed into the SMP/E environment. Press enter to continue:
On the KCIP@PG5 panel, EXCLUDE the product components that you do not want, then change the Confirm flag from N to Y and press enter:
To create an RTE with ITM and OMEGAMON XE on z/OS in it, on the list above I would exclude everything EXCEPT KDS, KM5 and KOB. KDS is ITM and provides the framework in which everything else operates, KM5 is the OMEGAMON XE on z/OS product and KOB is base code needed by OMEGAMON XE on z/OS and also provides the enhanced 3270 User Interface.
As this is a completely new RTE, skip the resultant popup by pressing enter:
Submit the KCIJPCFG JCL that is presented. This creates the work data sets needed by the rest of the PARMGEN process.
Select option 4
Submit the job JCL. This will load the work data sets with additional members.
Select option 8
Edit the $GBL$USR member
From this menu, select option 2 first. (You’ll only need to do this step this one time, even if you make changes to the RTE later on, so let’s do it now.)
The $GBL$USR member of the WCONFIG work data set is the user copy (i.e. yours) of the IBM supplied defaults member ($GBL$IBM) in the same data set. The IBM supplied member contains default names for data sets required by the RTE and also for the data sets that will receive things like started task JCL procs.
The entries in the $GBL$USR member are all commented out, but you can uncomment them and change them to your own site specific data set names as required. Typically I do not let PARMGEN override my production SYS1.PROCLIB or VTAM libraries, so I change those settings to point at my own staging libraries, from where I later manually copy the members as required to my live system libraries.
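As a hedged illustration (the exact parameter names depend on your PARMGEN level, and the staging data set names here are made up), the uncommented overrides in $GBL$USR might look something like:

```text
GBL_DSN_SYS1_PROCLIB    "MYHLQ.OMEG.STAGE.PROCLIB"
GBL_DSN_SYS1_VTAMLST    "MYHLQ.OMEG.STAGE.VTAMLST"
```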
Edit the RTE member
Now we get to the bulk of the configuration for the RTE. Select option 1 from the menu above and you will be placed in an edit session for the RTE member in the RTE’s WCONFIG data set.
In spite of all the parameters in the deck, you really only have to change a few, at least initially.
You may want to review the communications protocols that the RTE will use. If any are enabled, they should also be enabled at the hub TEMS. Typically I remove the SNA option, but your site will have its own requirements.
You may also need to change the default IP ports to be used by the environment, but if possible, stick to using the defaults (1918 etc).
If this RTE is NOT a hub, you will need to specify the IP address (in KDS_HUB_TCP_HOST) and port (in KDS_HUB_TCP_xxxx_PORT_NUM) of the hub.
Each product is configured within its own section in the RTE member, so they are easy to find. Just go through them in turn (configuring only a couple of products to start with keeps this as simple as possible) and set anything that needs changing. For the most part you’ll only need to change things like VTAM node names and possibly started task JCL member names.
Since this is your first RTE, it’s probably going to be for a test system, so if possible turn off (do not configure) any security to keep things as simple as possible. I would also suggest NOT using system variables initially; you can always change that later on.
Once you are done, PF3 to save and exit, then PF3 back to the main PARMGEN menu.
Select option 9
When not using system variables for an RTE, I always run step 9 to validate the RTE before proceeding. Submit the job JCL that is presented. When the job completes, if it has any errors, you can see the validation report in WCONFIG($VALRPT) by entering 9S on the main menu (press enter to clear the job information that is initially displayed and go to the report).
The first part lists the input data sets and numbers them.
The second part (labeled section 1) lists any errors along with the input data set number and the line number within that input deck.
Make a note of any errors, then go back to step 8 and make changes to the appropriate members (probably only $GBL$USR or the RTE member), then rerun step 9 to validate the input again.
Select Option 10
The $PARSE step takes all your input and loads the interim work data sets with everything needed for the actual generation process, which is coming up next.
Select Step 11
This brings up the submit menu. While you can run the jobs individually, it’s probably simplest initially to just select the KCIJPSUB composite job and run them all in one go.
Deploy the RTE
If you changed the $GBL$USR member to specify staging libraries for data sets such as SYS1.PROCLIB and SYS1.VTAMLST, then you can review the members in the staging libraries before copying them to your system’s live libraries.
At this point you should be able to start the started tasks and have the environment come up on this system. Remember, this example assumes you are running PARMGEN on the SAME system that the RTE will execute on.
Now that your RTE is up and running, to make changes do the following:
From the PARMGEN main menu, Select option 8 and then select option 1 to edit the RTE member.
Make the changes you need to the RTE parameters.
Select option 9 from the PARMGEN main menu to validate the RTE. If anything is wrong, edit the RTE again and repeat.
If no errors, select option 10 to run the $PARSE step.
Then select option 11 (submit) to build the RTE. You can safely run the composite job again or you can run individual jobs if you know what the change you made affected.
Deploy any changed members in the staging libraries if using and restart the started tasks.
While the above may seem like a lot to read, it actually takes far less time to do than it does to read and once you have the RTE up and running, making changes is just a matter of changing the parameters in the RTE deck and regenerating everything.
The PARMGEN books have been updated. Information and links to the new books can be found here.
Like most tools, there is more than one way to use the ITM PARMGEN configuration tool. Over the past few months I’ve been working with it and have come up with a few ways to get the best out of it. The following are based on my experiences using it.
Keep PARMGEN up to date.
PARMGEN is a constantly evolving tool. APARs are currently being released on a regular basis to update its capabilities.
The technote listing currently available service and other important information is here: http://www-01.ibm.com/support/docview.wss?uid=swg21417935
You can subscribe to the technote and get notified when changes are available.
Do not make changes outside of PARMGEN
So far as is possible I would recommend that you do NOT make direct changes to run time components such as start up parameters in run time data sets. If you do make such changes, the next time you make a change via PARMGEN and deploy the modified environment, you will lose your external changes. If you have to make such external changes I would recommend you make a note of the change as a comment in the PARMGEN Run Time environment (RTE) parameter deck to remind you to redo the change after a PARMGEN change.
Make all Changes via PARMGEN
If something in the run time environment is not working, in general it’s because something is not correctly configured in the PARMGEN parameter members for the environment. The correct way to fix the issue is to go back to PARMGEN, update the run time’s parameter deck, then regen and redeploy the environment.
Put a z/OS hub in its own Run Time Environment
If you run your ITM hub TEMS on z/OS then you have two choices: a hub that is configured to run on a single LPAR, or what is known as a High Availability (HA) hub that can run on any configured LPAR in a Sysplex.
The disadvantage of configuring your hub to run on a single LPAR is that if the LPAR becomes unavailable you lose your entire ITM monitoring environment. With an HA hub, you can move it to another LPAR and have everything back up and running very quickly without any reconfiguration being required.
One of the requirements of configuring an HA hub in PARMGEN is that there must be no other agents configured in that run time environment. This is so that it can move from LPAR to LPAR. If you are running a single LPAR hub now but think that you might want to convert to an HA hub later on, that task will be much easier if you anticipate it now by creating a run time environment in PARMGEN that contains only the hub component.
One RTE equals one LPAR
There is no hard and fast rule as to which agents and components you define to run within any given RTE but in general I have found that using a policy of one RTE equals one LPAR works best. There are a number of reasons for this:
- There’s far less work to run the jobs to gen everything in a single system.
- Since everything for the LPAR is in a single parameter deck, it’s much easier to be consistent with changes that affect all components in the environment, especially if you use system variables.
- You’ll use less disk space for run time data sets since many can now be shared by everything in the environment.
Use System Variables
Instead of hard coding every value in an RTE parameter deck, you can use symbolic names such as &SYSNAME. and &SYSCLONE. in many of the parameters. When you do this, run time parameter members and JCL decks are created with symbolic names that are resolved at execution time from the values defined on the run time system. The benefits of this are:
- Reduced maintenance. You can use a single common JCL deck for an address space in a common (to the Sysplex) JCL proclib. Data set names and parameters will resolve to values that are unique to the system the address space is executing on.
- You can use PARMGEN to create a copy (clone) of an RTE to run on another LPAR very easily and with minimal changes, thus reducing the time it takes to get a new RTE up and running and also improving reliability, since fewer changes are required to configure the new RTE for the new LPAR.
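For example, a run time data set name coded with a symbolic in the parameter deck resolves differently on each system (the MYHLQ qualifier here is made up; note the double period, where the first ends the symbol and the second is the real delimiter):

```text
MYHLQ.OMEG.&SYSNAME..RKANPARU   resolves to   MYHLQ.OMEG.SYSA.RKANPARU   (on the system whose &SYSNAME is SYSA)
```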
Create a Model RTE
One of the powerful features of PARMGEN is the ability to copy or clone an existing RTE into a new one. This saves you the effort of creating each RTE from scratch. When the mix of agents in each RTE differs from LPAR to LPAR, I have found it useful to create a minimal RTE that I use as the model for other RTEs. In my model RTE I have a remote TEMS, the base z/OS OMEGAMON components (product code OB) and the OMEGAMON XE on z/OS product. I also use system variables.
The best way to create a model like this is to create an actual RTE with just those components in it and then gen and deploy it. Once you have it all working (connecting to the hub, with data from the z/OS agent visible in the TEP and the E3270 User Interface), clone it into the model RTE. That way you know that any RTE you create from this model is based on a working example and should require minimal changes (especially if you are using system variables) to get it up and running.
To create a new RTE, I clone the model, add in any new agents and configure them in the RTE parameter deck. Then I gen the new RTE and deploy it. Using this technique, I have been able to create a complete working RTE with additional agents in it very quickly.
Build up an RTE in stages
Using the model approach described above, I have found the simplest way to create a working RTE with additional agents in it is to add them one at a time. Adding a new agent such as OMEGAMON XE for CICS on z/OS, configuring it, then generating, deploying and testing it can be done very quickly. This makes it much easier to diagnose and correct any problems with the new configuration and, since you are only adding one agent at a time you can concentrate on ensuring that it works correctly when deployed and that the existing elements continue to work as normal. It also has the benefit of allowing you to concentrate on each agent in turn as you add it which reduces the amount of information you need to research in order to set parameters.
Deploy Procs etc to ‘staging’ libraries
By default, PARMGEN is configured to write JCL procedures, VTAM node definitions and other components that need to be in ‘system’ libraries to the default system libraries such as SYS1.PROCLIB and SYS1.VTAMLST.
You probably do not want to do this. Instead I configure PARMGEN (via the $GBL$USR member in WCONFIG) to place these elements in my own ‘staging’ libraries. That way:
- I am not going to overwrite a ‘live’ element.
- I don’t need RACF authority to write to system libraries.
- I can check everything before copying it into the actual live libraries.
If you are using the TCP/IP protocol (defined in ITM as IP.PIPE, IP6.PIPE or IP6.SPIPE) as a transport protocol between your ITM components on z/OS then you may need to consider configuring TCP/IP to reserve specific ports for use by the ITM components in order to prevent other non ITM address spaces from acquiring them.
ITM TCP/IP port allocation algorithm
ITM and ITM agents use the following rules to allocate TCP/IP ports:
- The Hub or Remote TEMS always uses the well known port, typically 1918.
- The agents then attempt to acquire ports in sequence using the algorithm well known port + (n*4096) until they either obtain a port or they run out of ports to try. The actual starting port and number of attempts can be controlled by the SKIP and COUNT parameters as described later.
For example, if the well known port assigned to the hub or remote TEMS is 1918, the first agent to start will attempt to obtain port 1918+(1*4096) or 6014. If that fails, it will attempt to obtain port 1918+(2*4096) or 10110 and so on.
The actual port assigned to any given agent for any given execution will vary based on the startup order of the agents and various other timing related factors but will always follow the above pattern.
Thus you can predict the port numbers that will be required by the ITM infrastructure on a given LPAR.
Using 1918 as a starting point, the following port numbers are potential candidates for use by ITM:
1918 – Always assigned to the hub or remote TEMS
6014, 10110, 14206, 18302, 22398, 26494, 30590, 34686, 38782, 42878, 46974, 51070, 55166, 59262, 63358 – candidate agent ports (the well known port + n*4096, for n = 1 to 15)
Notice that this means you can have a maximum of one hub or remote TEMS and 15 agents on an LPAR.
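Since the candidate ports follow directly from the formula, you can compute the full list for any base port. A quick sketch (assuming the default well known port of 1918):

```python
def candidate_ports(well_known=1918, max_agents=15):
    """Ports ITM agents will try: well_known + n*4096 for n = 1..max_agents."""
    return [well_known + n * 4096 for n in range(1, max_agents + 1)]

# With the default base port of 1918 this yields 6014, 10110, ... 63358
print(candidate_ports())
```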
What are the potential problems?
Unless you take specific action by configuring TCP/IP, there is nothing to prevent any other non ITM application on a z/OS LPAR from using any of these ports. Thus it is entirely possible that enough of these ports are in use by other applications that there are not enough for all the ITM agents, resulting in connectivity issues.
This type of problem can be difficult to diagnose since it may only occur randomly: it depends on the unavailability of specific ports, and that will depend on which other applications are running and the ports they have acquired.
What ports do I need to reserve?
If you are running, for example, a remote TEMS and four agents on a z/OS LPAR, then the environment will need, as a minimum, the well known port plus four additional ports. The four additional ports do NOT have to be consecutive (using the list above). In the absence of the SKIP and COUNT parameters, each agent will try specific ports from the list (assuming 1918 as the base well known port) until it obtains a port. So you could quite validly reserve, for example, ports 34686, 38782, 42878 and 46974 for use by the four agent address spaces. In that case you might want to use the SKIP parameter to prevent agents from attempting to bind to the first 7 ports in the list.
Remember, these examples are based on 1918 being the well known port. The actual values in use will change if you use a different port number for the well known port.
Reserving ports in TCP/IP
The PROFILE DD of the TCP/IP started task JCL points to a dataset, or more typically a member of a dataset, that configures the TCP/IP environment. Within this configuration dataset or member you can use entries within the PORT statement to restrict specific ports to specific address spaces.
The port section might look like this:
7 TCP MISCSERV
7 UDP MISCSERV
9 TCP MISCSERV
9 UDP MISCSERV
19 TCP MISCSERV
19 UDP MISCSERV
20 TCP OMVS
Where the first value of each line is the port number, the second value is the protocol and the third value is the name of the address space that the port is limited to.
The critical port to reserve for ITM is the well known port, which is typically 1918. So if your TEMS address space is named CANSDSST and the well known port is 1918, then you could add the following entry to the PORT section of the TCP/IP configuration deck:
1918 TCP CANSDSST
This would prevent any other address space from obtaining port 1918.
Do I need to reserve ports for the agents?
The way that the allocation algorithm works allows you to have up to 15 agents running on an LPAR. If you only have a few agents on an LPAR then even if there are some ports from the potential list in use by non ITM applications, there are probably enough free ports from the list of potential ports to satisfy the needs of the agents.
However, if you have a lot of agents on an LPAR, or if you want to guarantee that specific ports are available to the ITM agent address spaces, then you may need to reserve specific ports for specific agent address spaces to ensure that they are available.
There are a couple of ways to do this:
- Configure TCP/IP to allow any agent to connect to any of the potential ports
- Configure each ITM agent to use a specific port and limit each port to a specific address space in the TCP/IP profile.
Configure TCP/IP to allow any agent to connect to any of the potential ports
The advantage of this method is that it only requires TCP/IP configuration and allows ITM to continue to dynamically assign ports to each agent based on availability and startup sequence. For example, if you had three agent started tasks CANSCICS, CANSMQ and CANSMFN, you would need to configure TCP/IP to reserve all the potential ports for each address space as follows:
6014 TCP CANSCICS
6014 TCP CANSMQ
6014 TCP CANSMFN
10110 TCP CANSCICS
10110 TCP CANSMQ
10110 TCP CANSMFN
14206 TCP CANSCICS
14206 TCP CANSMQ
14206 TCP CANSMFN
and so on, all the way up to port 63358.
Obviously as you add more agent address spaces you have to add more entries for each port.
Configure each ITM agent to use a specific port and limit each port to a specific address space in the TCP/IP profile.
This method uses a combination of ITM and TCP/IP configuration options to achieve the desired result. The advantage of this method is that the port assigned to each agent address space is predictable.
First, let’s configure TCP/IP to reserve a single specific port for each agent address space:
6014 TCP CANSCICS
10110 TCP CANSMQ
14206 TCP CANSMFN
Now you need to configure each agent to only use a specific port. To do this you need to edit the KDE_TRANSPORT IP.PIPE (and/or IP6.PIPE or IP6.SPIPE if in use) entry of each agent’s KppENV member in RKANPARU as follows:
For the CICS agent address space CANSCICS:
KDE_TRANSPORT=IP.PIPE PORT:1918 USE:Y COUNT:1
For the MQ agent address space CANSMQ:
KDE_TRANSPORT=IP.PIPE PORT:1918 USE:Y COUNT:1 SKIP:1
For the MFN agent address space CANSMFN:
KDE_TRANSPORT=IP.PIPE PORT:1918 USE:Y COUNT:1 SKIP:2
The COUNT:1 parameter tells the agent address space to only try 1 port number from the potential list of ports.
The SKIP parameter tells the agent address space to skip that number of ports in the available list before trying to bind to the port. It is not required for the CICS agent (in this example) because the CICS agent address space will bind to the first port in the list.
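The way SKIP and COUNT narrow the candidate list can be modeled in a few lines (this is my reading of the behavior described above, not IBM code): SKIP drops that many ports from the front of the list, and COUNT caps how many are tried.

```python
def ports_to_try(well_known=1918, skip=0, count=15):
    """Candidate agent ports after applying the SKIP and COUNT parameters."""
    candidates = [well_known + n * 4096 for n in range(1, 16)]
    return candidates[skip:skip + count]

# Matching the three KppENV examples above:
print(ports_to_try(count=1))          # [6014]  CANSCICS
print(ports_to_try(count=1, skip=1))  # [10110] CANSMQ
print(ports_to_try(count=1, skip=2))  # [14206] CANSMFN
```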
Remember, these examples are based on the well known port number being 1918.
High Availability Hubs
A high availability hub is a hub TEMS address space that uses a DVIPA IP address to enable it to run on any candidate LPAR (one with the DVIPA address configured) within the sysplex. The location of the IP address moves with the hub address space so that as it moves from LPAR to LPAR, the agents and remote TEMS can connect to it, no matter where it is.
A hub TEMS uses the same well known port number as the rest of the ITM infrastructure (remote TEMS). However, because a high availability hub binds to a different IP address (the DVIPA address) from that defined for the host LPAR, the high availability hub can execute on the same LPAR as a remote TEMS, even though both are using the same well known port.
However, if you have reserved the well known port on each LPAR for a remote TEMS address space, you must also reserve the same port for the high availability hub address space name.
So, in the PORT section of the TCPIP profile dataset or member you might have the following entry to reserve the well known port 1918 for the remote TEMS:
1918 TCP CANSDSST
To this you would need to add (assuming the high availability hub address space name is CANSHAHB):
1918 TCP CANSHAHB
Both address spaces can bind to the same port because the high availability hub is using the DVIPA IP address.
Refreshing the TCPIP configuration
After making changes to the TCPIP profile dataset or member you can use the OBEYFILE command to cause the TCPIP address space on an LPAR to reload its configuration file.
Listing reserved ports
You can list the currently configured TCP/IP reserved ports using the TSO NETSTAT PORTLIST command.
In TSO, issue the command:
NETSTAT PORTLIST [REP DSN dsn]
The REP DSN dsn option causes the command to write the output to the data set specified by the dsn operand.
Remember: just because a port or address space name is defined in the TCPIP profile dataset or member does not mean it is actually in effect in TCP/IP. You must restart TCPIP or issue the OBEYFILE command to refresh the TCPIP configuration.
Diagnosing Port Permissions Problems
In the agent or RTEMS RKLVLOG output you may see this sort of message:
(0017-D69321E3:kdebbbi.c,128,”KDEB_BaseBind”) Status 1DE00000=KDE1_STC_CANTBIND=2: EACCES
(0018-D69321E3:kdebbbi.c,132,”KDEB_BaseBind”) <0x29615378,0x10> bind: ASD 289FD7A0, status 1DE00000, errno 2
+0018 00000000 00022774 092A2E16 00000000 00000000 …………….
- KDE1_STC_CANTBIND indicates a bind failure.
- EACCES indicates the bind failed because of a permissions issue. The address space is not authorized to bind to the port.
- The port number (in hex) is the lower four digits of the word indicated in the message above. In this case 2774 (hex) is port 10100 decimal.
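As a quick check of that arithmetic, the port is the low-order halfword of the word 00022774 shown in the trace:

```python
word = 0x00022774      # word shown in the RKLVLOG message
port = word & 0xFFFF   # low-order four hex digits
print(hex(port), port) # 0x2774 10100
```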
Use the NETSTAT PORTLIST command to determine if the address space is authorized to access the port, for example:
NETSTAT PORTLIST (PORT 10100
The (PORT portnumber operand limits the output to just the requested port.
A note about EACCES issues.
My research indicates that an EACCES return code is returned when the application first attempts to use a port, NOT when it is allocated. What this means for ITM is that the port allocation algorithm may select a free port that is actually restricted by TCP/IP. However ITM will not find out about this until it tries to bind to the port. At that point the port allocation process is complete so you end up with an assigned port that you cannot use.
In this instance I believe the only solution is to use the COUNT and SKIP parameters to force agents to only use ports in specific ranges and to avoid any reserved port ranges. You don’t have to assign a specific port to each agent, you can still let them pick from a range based on first available but if you have for example, a range of reserved ports in the middle of the normally free range of ports, you may need to configure some agents to only use ports below that reserved range and others to use the ports above the range.
The actual configuration you need to use to make the agents avoid the reserved range or ranges will depend on the number of agents you are running on the system and where in the list of normally free ports, the reserved range or ranges are.
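As a sketch of that planning exercise (my own helper, not an IBM tool): given a reserved range to keep clear of, you can derive the (SKIP, COUNT) pairs that cover the remaining candidate ports below and above it.

```python
def skip_count_around(reserved, well_known=1918):
    """Return (SKIP, COUNT) pairs covering candidate ports outside 'reserved'."""
    candidates = [well_known + n * 4096 for n in range(1, 16)]
    free = [i for i, p in enumerate(candidates) if p not in reserved]
    groups = []
    for i in free:  # group contiguous free indexes into (SKIP, COUNT) pairs
        if groups and groups[-1][0] + groups[-1][1] == i:
            skip, count = groups[-1]
            groups[-1] = (skip, count + 1)
        else:
            groups.append((i, 1))
    return groups

# With ports 34686-46974 reserved for other work, agents below the range
# can use SKIP:0 COUNT:7 and agents above it SKIP:11 COUNT:4
print(skip_count_around(range(34686, 46975)))
```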