Archive

Posts Tagged ‘IBM’

PARMGEN Strategies…

March 13, 2013

Like most tools, there is more than one way to use the ITM PARMGEN configuration tool. Over the past few months I’ve been working with it and have come up with a few ways to get the best out of it. The following tips are based on my experience using it.

Keep PARMGEN up to date

PARMGEN is a constantly evolving tool. APARs are being released on a regular basis to extend its capabilities.

This technote lists the currently available service and other important information: http://www-01.ibm.com/support/docview.wss?uid=swg21417935

You can subscribe to the technote and get notified when changes are available.

Do not make changes outside of PARMGEN

As far as possible, I recommend that you do NOT make direct changes to run time components, such as start up parameters in run time data sets. If you do make such changes, the next time you make a change via PARMGEN and deploy the modified environment, you will lose your external changes. If you have to make such an external change, make a note of it as a comment in the PARMGEN Run Time Environment (RTE) parameter deck to remind you to redo it after the next PARMGEN change.
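For example, a reminder comment in the RTE parameter deck might look like the lines below. PARMGEN treats lines starting with an asterisk as comments; the member name (KDSENV) and variable (CTIRA_HEARTBEAT) are just examples from my own environment and purely illustrative.

* 2013/03/08 Manual change: RKANPARU(KDSENV) on this LPAR had
* CTIRA_HEARTBEAT changed to 10 for problem diagnosis.
* Redo this after the next regen/redeploy of this RTE.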

Make all Changes via PARMGEN

If something in the run time environment is not working, in general it’s because something is not correctly configured in the PARMGEN parameter members for the environment. The correct way to fix the issue is to go back to PARMGEN, update the run time’s parameter deck, then regen and redeploy the environment.

Put a z/OS hub in its own Run Time Environment

If you run your ITM hub TEMS on z/OS then you have two choices: a hub that is configured to run on a single LPAR, or what is known as a High Availability (HA) hub that can run on any configured LPAR in a Sysplex.

The disadvantage of configuring your hub to run on a single LPAR is that if the LPAR becomes unavailable you lose your entire ITM monitoring environment. With an HA hub, you can move it to another LPAR and have everything back up and running very quickly without any reconfiguration being required.

One of the requirements when configuring an HA hub in PARMGEN is that there must be no other agents configured in that run time environment. This is what allows it to move from LPAR to LPAR. If you are running a single LPAR hub now but think you might want to convert to an HA hub later on, that task will be much easier if you anticipate it now by creating a run time environment in PARMGEN that contains only the hub component.

One RTE equals one LPAR

There is no hard and fast rule as to which agents and components you define to run within any given RTE, but in general I have found that a policy of one RTE equals one LPAR works best. There are a number of reasons for this:

  • There’s far less work involved in running the jobs to gen everything for a single system.
  • Since everything for the LPAR is in a single parameter deck, it’s much easier to be consistent with changes that affect all components in the environment, especially if you use system variables.
  • You’ll use less disk space for run time data sets since many can now be shared by everything in the environment.

Use System Variables

Instead of hard coding every value in an RTE parameter deck, you can use symbolic names such as &SYSNAME. and &SYSCLONE. in many of the parameters. When you do this, run time parameter members and JCL decks are created with symbolic names that are resolved at execution time from the values defined on the run time system. The benefits of this are:

  • Reduced maintenance. You can use a single common JCL deck for an address space in a common (to the Sysplex) JCL proclib. Data set names and parameters will resolve to values that are unique to the system the address space is executing on (see the sketch after this list).
  • You can use PARMGEN to create a copy (clone) of an RTE to run on another LPAR very easily and with minimal changes. This reduces the time it takes to get a new RTE up and running and also improves reliability, since fewer changes are required to configure the new RTE for the new LPAR.
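As a sketch of what this looks like at execution time (the data set names here are hypothetical), a started task JCL deck generated with symbolics might contain a DD statement like this:

//RKANPARU DD DISP=SHR,DSN=ITM.&SYSNAME..RKANPARU

When the task starts on LPAR SYSA, z/OS resolves the symbolic and the DD points at ITM.SYSA.RKANPARU. Start the identical proc on SYSB and it points at ITM.SYSB.RKANPARU, so one deck in a Sysplex-shared proclib serves every system.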

Create a Model RTE

One of the powerful features of PARMGEN is the ability to copy, or clone, an existing RTE into a new one. This saves you the effort of creating each RTE from scratch. When the mix of agents in each RTE differs from LPAR to LPAR, I have found it useful to create a minimal RTE that I use as the model for the others. In my model RTE I have a remote TEMS, the base z/OS OMEGAMON components (product code OB) and the OMEGAMON XE on z/OS product. I also use system variables.

The best way to create a model like this is to create an actual RTE with just those components in it, then gen and deploy it. Once you have it all working, connecting to the hub and showing data from the z/OS agent in both the TEP and the E3270 User Interface, clone it into the model RTE. That way you know that any RTE you create from the model is based on a working example and should require minimal changes (especially if you are using system variables) to get it up and running.

To create a new RTE, I clone the model, add in any new agents and configure them in the RTE parameter deck. Then I gen the new RTE and deploy it. Using this technique, I have been able to create a complete working RTE with additional agents in it very quickly.

Build up an RTE in stages

Using the model approach described above, I have found the simplest way to create a working RTE with additional agents is to add them one at a time. Adding a new agent such as OMEGAMON XE for CICS on z/OS, configuring it, then generating, deploying and testing it can be done very quickly. This makes it much easier to diagnose and correct any problems with the new configuration. Since you are only adding one agent at a time, you can concentrate on ensuring that it works correctly when deployed and that the existing elements continue to work as normal. It also allows you to concentrate on each agent in turn as you add it, which reduces the amount of information you need to research in order to set its parameters.

Deploy Procs etc to ‘staging’ libraries

By default, PARMGEN is configured to write JCL procedures, VTAM node definitions and other components that need to be in ‘system’ libraries to the default system libraries such as SYS1.PROCLIB and SYS1.VTAMLST.

You probably do not want to do this. Instead I configure PARMGEN (via the $GBL$USR member in WCONFIG) to place these elements in my own ‘staging’ libraries (see the example after this list). That way:

  • I am not going to overwrite a ‘live’ element.
  • I don’t need RACF authority to write to system libraries.
  • I can check everything before copying it into the actual live libraries.
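From memory, the overrides in $GBL$USR look something like the lines below; the staging data set names are my own, and you should check the $GBL$IBM member at your PARMGEN level for the exact parameter names and defaults.

GBL_DSN_SYS1_PROCLIB "MYHLQ.ITM.STAGE.PROCLIB"
GBL_DSN_SYS1_VTAMLST "MYHLQ.ITM.STAGE.VTAMLST"

PARMGEN then writes the started task procs and VTAM node definitions to the staging libraries, and you copy them into the live libraries yourself once you have checked them.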

Categories: ITM

The case of the wrong case…

September 18, 2011

This has been a bit of a detective story so I thought I’d mention it here in case it helps anyone.

As part of my efforts to move my web based ISPF interface to the stand alone IBM HTTP server I needed to run the REXX execs under USS. Now to be honest I rarely ever touch USS on z/OS but that’s where I needed to be so that’s where I am.

I typically write my REXX execs in lower case so I might end up with one exec invoking another with something like this:

data=myfunc(parm1,parm2,etc)

Where myfunc is another REXX exec in the standard search order (SYSPROC and SYSEXEC typically). This works fine because even though the function name is written in lower case, REXX converts it to upper case and that matches the member name in the library just fine. Everything works great.

BUT! Move over to USS land and things are not so simple. REXX in USS land is case sensitive. Using the example above, I had created the ‘myfunc’ exec file in USS with a lower case file name and was surprised when the calling exec could not find it.

Eventually (after two days) I found that, by default, REXX folds function names coded like this to UPPER CASE before searching for them (I knew this, I had just forgotten about it), so REXX was searching for a file called ‘MYFUNC’ whilst the file I had created was called ‘myfunc’. Not the same animal in a case sensitive environment.

I could make all my exec file names upper case to address this, but in the event you need to call a lower case function name you can code it like this:

data='myfunc'(parm1,parm2,etc)

And amazingly it will now find the lower case exec file.
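To put the whole thing together, here is a minimal sketch; the file names and directory are just examples. Say /u/me/rexx/myfunc is a lower case file in a directory on the search path:

/* rexx - the lower case file 'myfunc' */
parse arg p1, p2
return p1'-'p2

and the calling exec looks like this:

/* rexx - the calling exec */
data='myfunc'('parm1','parm2')  /* quoted: searched for as-is */
say data                        /* displays parm1-parm2 */

Code the call as data=myfunc('parm1','parm2') instead and REXX searches for MYFUNC, which fails with a ‘routine not found’ error because only the lower case file exists.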

ISPF on the web – Version 2

September 15, 2011

Just a quick update, more details and info to follow soon.

My initial iteration of ISPF on the web required the user to still log on to TSO/ISPF on a 3270 and then start the web server inside their TSO/ISPF session before accessing it via a browser.

Whilst this was an interesting exercise and learning experience, obviously it’s about as useful as a chocolate fireguard in practice.

Hence the move to version 2!

The UI experience in the browser is the same but the back end is significantly different, running inside the IBM stand alone HTTP web server.

I am still developing this in my spare time, but as soon as I get a demo up and running I shall be adding more information so stay tuned.

ISPF on the web – Demo

April 19, 2010

It’s taken a little bit of experimentation but I’ve come up with a basic drag and drop interface. So far there’s no back end interaction, so all the interface does is pop up an alert to display the properties of what was dropped. There’s also a lot of ‘hard coded’ stuff to make things happen, but at this point I’m more interested in the interface itself than the how.

On the right I’ve got a little tool box containing a trash can and a JES card reader (or at least the best impression of one I can come up with). The idea is that by dragging the icons next to the files, data sets or PDS members onto the tools, you get the appropriate action. I had to use icons rather than the whole dataset name because the length of the dataset name meant I was getting a lot of ‘misses’ when dropping the elements onto the tools, due to the way the jQuery UI drag and drop functionality works. Limiting it to the icons just made things a lot easier to use.

Oh, and although I didn’t demo it, if you scroll the page up and down, the toolbox stays in the same location so it’s always there to interact with.

The next steps are:

  • Add some front end code to the trash can to confirm you actually want to delete the item and then send the request to the back end
  • Add some code to send the job submission request to the back end, where the REXX will have to try to confirm that this is actual JCL you are submitting, or at least that it has a job card (a first cut of that check is sketched below).
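For that second item, a first cut of the job card check in REXX might be as crude as the sketch below. Assume the back end has already read the submitted lines into a stem called line.; real JCL validation is a much bigger job than this.

/* rexx - crude sniff test: does the first line look like a job card? */
line.1='//MYJOB   JOB (ACCT),CLASS=A'        /* stand-in test data */
if hasjobcard(line.1) then say 'Looks like JCL, passing it on to submit'
else say 'Rejected: no job card on the first line'
exit

hasjobcard: procedure
parse arg card
if left(card,2)<>'//' then return 0          /* must be a JCL statement */
parse var card 3 name op .                   /* job name, then operation */
if name='' | length(name)>8 then return 0    /* job name is 1 to 8 chars */
return translate(op)='JOB'                   /* operation must be JOB */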

This is actually pretty fun stuff to do!

More progress with ISPF on the web…

April 16, 2010

Well, it’s been a bit of a battle but I’ve managed to get the basic dataset member and contents list function working to my satisfaction.

I decided to use the jQuery UI dialog function to create dialog boxes to display member lists and member and sequential dataset contents. I hit a few road blocks along the way, not least of which was propagating the capture of links so that I could get the data using jQuery Ajax functions.

One thing I found was that when you return a fragment of a page as the result of an Ajax call, you can include fragments of JavaScript and they will be executed. The problem I had was that while this worked fine for the dataset list form (using the form plugin), it didn’t work when I intercepted the links myself.

Turns out that the jQuery Ajax functionality ($.get) executes the script before it invokes my callback routine to process the data. Since my callback has not run yet, the elements the script wants to manipulate (in the returned data) are not there yet so nothing worked!

In the end I managed to come up with a mechanism that propagated the link activity without having to embed script into the fragments but it was a pain to do.

One benefit, though, is that I am able to resize the dialog pop ups if need be inside that code. The trickiest part was finding out whether the pop up had scroll bars or not. Why is that so hard to do? In the end I found a really nice example of some code that I was able to modify to my needs.

Anyway, enough talk, here’s a couple of screen shots. The first shows multiple dialogs open with dataset member lists and contents.

The second screen shot shows that you can switch tabs and the dialogs remain open. Imagine this was another ISPF type function. Now you can mix and match the pop ups.

Because I am displaying info in pop up dialogs on the page, now I can start to think about drag and drop. For example I could drag a member from one member list to another to move or copy a member from one dataset to another.

Or I could drag a member, or even a whole dataset (from the dataset list), onto a trash can icon to delete it.

Another thought I had was to have pre built jobs. Drag a job member onto a reader icon and it submits the job.

See, this stuff can be really REALLY cool!

EGL

January 9, 2010

Enterprise Generation Language (EGL) is a high level programming language from IBM designed to be platform independent. Instead of having to learn multiple languages like JavaScript and Java, and how to organize and deploy the results, you just develop in EGL and let the Eclipse based IDE tools handle the details of deployment (for the most part anyway).

Being a ‘low level’ type of guy, I find these high level tools don’t sit well with me, and I have to admit that I have struggled to get started with it. But! There is hope. I finally managed to create a simple web application using it today. Next step: consume some sort of web service.

Categories: Coding, Development Tools

Redbook

January 6, 2010

I’m off doing a Redbook for IBM shortly so my current iPhone work is going to be taking a back seat for a few weeks while I get up to speed on EGL amongst other things.

Normally these things are residencies and as much as I’d like to spend four weeks in mid winter in New York (not really), pressure of work means I will be working on it remotely. Still, should be a fun and interesting experience.

Categories: General Stuff