As previously mentioned I am on the System z boot camp course right now. In order to gain this qualification, the main thing I have been given to do is to read and learn the entire book, "Introduction to the New Mainframe: z/OS Basics". This can be freely downloaded in PDF format from that link. You can also buy it in dead tree format. The latter is more useful, even though it's a monstrous, verbose, dry 500 pages and the images are black and white.
As part of my revision for this test I took notes. Since I think other people are likely to take this test after me, and others still are possibly just interested in the central concepts of System z, here they are.
- Mainframe staff roles
- Mainframe hardware
- Operating systems
- Storage / memory
- Data storage
- IPL (Initial Program Load)
- Interaction with z/OS
- JCL (Job Control Language)
- Workload Management
- Finally, Here Is How z/OS Actually Works
- Application Design
- Programming for z/OS
- Compiling and link-editing a program on z/OS
- Transaction management
- IMS (Information Management System)
- IMS Database Manager
- z/OS HTTP Server
- WebSphere Application Server on z/OS
- Messaging and Queuing
- z/OS System Programming
- Where z/OS looks for a program when it's requested through a system service, in sequence:
- Changes are strictly controlled by disciplined procedures and audit:
- SMP/E (System Modification Program/Extended)
- z/OS Utility Programs
- About the mastery test
Mainframe staff roles
- System Programmer (SysProg)
- Installs, customises and maintains (upgrades) the operating system and middleware; reads dumps
- System Administrator (SysAdmin)
- Installs software, day-to-day maintenance, user/security management, data/storage management
- Application Designer, Application Programmer
- Design, program and maintain applications
- System Operator (SysOp)
- Monitors and controls mainframe hardware and software. Starts/stops system tasks and the machine itself
- Production Control Analyst
- Manages batch jobs, code that is in production
- Hardware support, software support, client representative
The physical box is called the Central Processor Complex or CPC. The box contains channels, processors and memory.
Support Element (SE)
The SE is simply a ThinkPad mounted inside the CPC.
Communication between memory and processors is handled by specialised controllers running firmware.
Communication between processors and channels is handled by specialised controllers running firmware.
It is the SE which controls these specialised controllers.
The SE can in turn be remotely controlled from a Hardware Management Console (HMC) (a PC elsewhere, connected to the SE over a private LAN).
An HMC can be used to control multiple SEs in separate CPCs.
Starting up a mainframe for the first time is handled by the SE, and is called Initial Program Load (IPL).
The CPC contains about 1000 physical I/O channels. Channels used to be parallel copper cables but are now serial fibre-optic ESCON (Enterprise Systems CONnection) and FICON (FIbre CONnection) links. Each channel has a unique Physical CHannel IDentifier or PCHID.
Channels are controlled by a channel subsystem.
A channel connects an adapter on the CPC to a channel adapter on a control unit, either directly, or through one or more directors (switches).
A control unit can be connected by more than one channel simultaneously. Each control unit has a control unit number, and controls one or more devices, such as physical drives and communication devices (e.g. LAN adapters).
Each device also has a number. Thus, a combination of channel number, control unit number and device number is an address or device number which uniquely specifies a device. A device can have multiple addresses due to redundant channels.
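As a toy illustration of that composition, here's how an old S/370-style "cuu" address put one hex digit each of channel, control unit and device together. (Modern device numbers are four hex digits assigned in the I/O configuration rather than derived this way, so treat this as a sketch of the idea, not how current machines do it.)

```python
def cuu(channel: int, control_unit: int, device: int) -> str:
    """Compose an old-style 'cuu' device address, one hex digit each.

    A sketch only: modern z/OS device numbers are defined in the I/O
    configuration (IODF) rather than built from the physical path.
    """
    assert 0 <= channel <= 0xF
    assert 0 <= control_unit <= 0xF
    assert 0 <= device <= 0xF
    return f"{channel:X}{control_unit:X}{device:X}"
```

So channel 1, control unit 10 (hex A), device 3 gives the address "1A3" — and the same device reached over a redundant channel would get a second, different address.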
Processing Units (PUs)
The CPC also contains 1 to 4 books, each of which contains 12 to 16 processors and a ton of real memory.
A processor (also known as an engine and various other names) can be characterised (by altering the microcode running on them) in various ways...
- CP (Central Processor)
- Can do anything and everything, most notably, z/OS.
- SAP (System Assistance Processor)
- Every mainframe has at least one of these: it executes Licensed Internal Code (IBM's private internal code) to drive the I/O subsystem.
- IFL (Integrated Facility for Linux)
- Only used by Linux LPARs (see below) and virtual systems.
- zAAP (z Application Assist Processor)
- Java and XML workloads.
- zIIP (z Integrated Information Processor)
- DB2 (see later) and IPSec.
- ICF (Integrated Coupling Facility)
- More Licensed Internal Code, this time for Coupling Facility (see below) Control Code.
...or it can be uncharacterised (spare), but ready to take over in the event of a failure.
IBM charges a LOT more for processors which run z/OS. zAAPs, zIIPs and so on exist just to save MONEY; they do not improve PERFORMANCE.
CPs can also run under capacity to save more money.
Capacity On Demand allows for temporary increases in processor capacity.
In a Customer-Initiated Upgrade (CIU) a customer permanently "downloads" (unlocks) additional memory or processing capacity.
Logical partitioning of the hardware
A System z mainframe can be unified, or it can be divided up into logical partitions (LPARs).
Logical partitioning is handled by the Type 1 hypervisor which comes with the standard Processor Resource/Systems Manager (PR/SM) hardware feature on all mainframes.
The SE contains the system profile and input/output control data set (IOCDS) which says how LPARs are configured on the machine.
Each LPAR has DEDICATED memory, but they can share the use of processors and I/O channels.
Each LPAR has the use of logical processors and logical I/O channels (CHannel Path IDentifiers, CHPIDs) which correspond in various ways to physical processors and PCHIDs.
Each logical channel subsystem can support up to 256 channels. Each logical channel has a channel number (00h to FFh). Note that this tops out at 256 whereas there can be up to ~1000 physical channels.
Each LPAR supports an independent OS loaded by a separate IPL operation.
Each LPAR has a percentage priority.
Clustering of hardware (Sysplexes)
A parallel sysplex (systems complex) is basically a mainframe cluster.
The key to synchronising work across multiple CPCs is the Time-Of-Day (TOD) clock, which is synchronised with a separate piece of hardware called an External Time Reference (ETR) or Sysplex Timer, using a protocol called Server Time Protocol (STP) which is implemented in Licensed Internal Code.
The Coupling Facility enables centrally accessible, high performance data sharing within a sysplex. A CF can be a separate, standalone machine, or contained in an LPAR using the special ICF engine type.
A Geographically Dispersed Parallel Sysplex can be up to 100km apart (ish).
The OS is referred to as the Base Control Program (BCP).
We deal here mainly with z/OS but there are others:
- z/VSE (Virtual Storage Extended)
- z/OS's older brother, originally simply called "DOS". It began as a stop-gap solution while waiting for OS/360, but people still use it.
- Linux for System z
- Doesn't support 3270 terminals, which are most commonly used to interact with z/OS. Uses ASCII, unlike z/OS which uses EBCDIC.
- z/TPF (Transaction Processing Facility)
- Formerly Airline Control Program (ACP). Special-purpose OS dedicated for transaction processing.
- z/VM (Virtual Machine)
Type 2 hypervisor virtualisation. Consists of a control program (CP) and a single-user Conversational Monitoring System (CMS). More users => more CMSes. Can run anything else in this list, in a virtual machine. Shares real resources among virtualised "guest systems" which believe they all have dedicated access to said resources.
NOTE: in a virtual machine, unlike an LPAR, the ENTIRE system is virtualised: there can theoretically be more virtual processors than actual ones; virtual RAM is shared among all the images inside the z/VM LPAR; virtual devices correspond to real devices but may be named differently.
z/VM is more complex to use than simple logical partitioning, but it also allows for potentially hundreds of VMs to run simultaneously.
z/OS is made up of modules/routines, which are in turn made of instructions and macros (instruction groups).
z/OS stores information in control blocks in memory, which come in the following varieties:
- System-related control blocks
- One per z/OS system; contains system-wide information
- Resource-related control blocks
- One per resource (processor, storage device, ...)
- Job-related control blocks
- One per currently executing job
- Task-related control blocks
- One per task (each job can consist of multiple tasks)
The internal structure and contents of a control block are publicly available so they serve as vehicles for communication of data, flags and addresses.
z/OS executes individual tasks based on priority and ability to execute. It then loads program instructions and data into memory while executing.
Storage / memory
All programs have free usage of an amount of memory. Memory locations are accessed using a numerical address. z/OS is a 64-bit operating system, which means that a memory address is (at most) a 64-bit number. This means the address space (the set of all addressable memory locations) is (at most) 2^64 bytes or 16 EiB (exbibytes) in size. The address is divided into:
|0 to 10||Region First indeX||2^11 = 2048 entries of 8 PiB each|
|11 to 21||Region Second indeX||2^11 = 2048 entries of 4 TiB each|
|22 to 32||Region Third indeX||2^11 = 2048 entries of 2 GiB each|
|33 to 43||Segment indeX||2^11 = 2048 segments of 1 MiB each|
|44 to 51||Page indeX||2^8 = 256 pages of 4 KiB each|
|52 to 63||Byte indeX||2^12 = 4096 bytes|
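To make the bit-slicing concrete, here's a minimal Python sketch that splits a 64-bit virtual address into those six indices. The names are mine; remember that IBM numbers bits from 0 at the MOST significant end, so bits 52 to 63 are the low-order 12 bits.

```python
def dat_indices(vaddr: int) -> dict:
    """Split a 64-bit virtual address into its translation indices.

    IBM bit 0 is the most significant bit, so bits 52-63 (the byte
    index) are the low-order 12 bits of the address, and so on upward.
    """
    assert 0 <= vaddr < 2 ** 64
    return {
        "RFX": (vaddr >> 53) & 0x7FF,  # bits 0-10:  region first index
        "RSX": (vaddr >> 42) & 0x7FF,  # bits 11-21: region second index
        "RTX": (vaddr >> 31) & 0x7FF,  # bits 22-32: region third index
        "SX":  (vaddr >> 20) & 0x7FF,  # bits 33-43: segment index
        "PX":  (vaddr >> 12) & 0xFF,   # bits 44-51: page index
        "BX":  vaddr & 0xFFF,          # bits 52-63: byte index
    }
```

For example, address 0x12345 sits at byte 0x345 of page 0x12 of segment 0, which matches the "byte index is the bottom 12 bits" reading of the table.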
Each program/user has its own virtual address space with an address space identifier (ASID). This corresponds roughly with the concept of a process and a process ID on Unix, though each address space can start multiple tasks one after the other.
No system actually has this much memory, and CERTAINLY not this much memory for each job and user. Dynamic Address Translation takes an ASID and a memory location in that virtual address space, and translates them into a real memory address.
Paging and Swapping
z machines have lots of RAM:
- central storage or real memory
- accessed synchronously
- very fast
- controlled by the Real Storage Manager
- divided into frames, each of which can hold one page (4 KiB) of virtual storage
but can also use disk drives for the same purpose:
- auxiliary storage or swap space
- very slow
- controlled by the auxiliary storage manager
- divided into slots, each of which can hold one page (4 KiB) of virtual storage.
The active pages of a program are stored in central storage. The inactive pages of the program, and all of its data, are stored in a page data set in auxiliary storage.
When a program attempts to access a memory location which (according to DAT) is currently in auxiliary storage, a page fault interrupt occurs and the page containing that piece of memory is moved (paged in) from auxiliary storage into central storage. Paging in also occurs when a program is loaded into memory for the first time.
If there are no free frames in central storage, then an existing frame is moved (paged out) to auxiliary storage first.
z/OS actually maintains a supply of free frames, and pages frames out to auxiliary storage when this supply becomes low. This is called page stealing. Pages can be fixed to prevent them from being stolen, but otherwise, it works like this (this is all handled by the RSM):
- Each frame has a reference bit. This is set to 0 initially but turns to 1 when the frame is referenced.
- Each frame also has an unreferenced interval count. This is set to 0 initially.
Every so often, z/OS checks each frame for reference bits.
- If the bit is 0, the frame still has not been referenced. The unreferenced interval count is increased.
- If the bit is 1, the frame has been referenced. The reference bit and unreferenced interval count are both reset to 0.
- Frames with high unreferenced interval counts are the most likely to be stolen.
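Here's a minimal Python sketch of that aging algorithm, assuming a simple list of frames (the real RSM is considerably more involved, and the names are mine):

```python
class Frame:
    def __init__(self, fixed=False):
        self.referenced = False  # the reference bit
        self.uic = 0             # the unreferenced interval count
        self.fixed = fixed       # page-fixed frames are never stolen

def interval_check(frames):
    """What z/OS does every so often: age unreferenced frames, reset the rest."""
    for f in frames:
        if f.referenced:
            f.referenced = False
            f.uic = 0
        else:
            f.uic += 1

def steal_candidate(frames):
    """Pick the least-recently-referenced, non-fixed frame to page out."""
    stealable = [f for f in frames if not f.fixed]
    return max(stealable, key=lambda f: f.uic) if stealable else None
```

A frame that keeps getting touched keeps its count at zero and survives; an idle frame's count climbs each interval until it becomes the best candidate for stealing.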
Swapping is paging an entire address space at once. This is performed by the System Resource Manager (SRM) when the Workload Manager asks it to.
Division of memory
Storage can be divided up in the following ways:
Numerically. z/OS was originally a 24-bit operating system, only able to address storage below 16 MiB, known as "the Line". As time passed it became a 31-bit operating system - storage between 16 MiB and 2 GiB is called "above the Line". Now it is 64-bit. The 2 GiB limit is called "the Bar".
Because of these changes, every z/OS program has a residency mode (RMODE) and an addressing mode (AMODE).
RMODE determines where in virtual storage the program's code must reside (be stored) - 24, 31 or 64.
AMODE determines what range of virtual storage the program can address as a whole - this includes both program code and data - 24, 31 or 64.
AMODE is always greater than or equal to RMODE. (After all, how could you have program code in a range the program can't address?)
All address spaces are distinct and specific to the user/job to which they belong. There is an exception: the Common Storage Area which all address spaces share. DAT always translates virtual addresses in each address space's Common Area to the same real address. Among other things, z/OS is stored here and address spaces are themselves represented by an Address Space Control Block (ASCB) here. In order to prevent programs from interfering with one another, we use:
Each program request to read or modify the contents of a frame of storage has a 4-bit key with it.
Each frame of central storage has a 4-bit storage protect key and a single fetch protect bit.
A program can't modify a frame, or fetch a fetch-protected frame, unless the keys match, or the program's key is 0000b.
Key 0000b is reserved for small portions of the BCP.
Keys 0001b to 0111b (1d to 7d) are reserved for z/OS, various subsystems and middleware products.
Key 1000b (8d) is for users using private storage (this is okay because they cannot access each other's address spaces). i.e. MOST USERS.
Keys 1001b to 1111b (9d to 15d) are for the rare users who run with virtual storage equal to real storage (V=R), i.e. right there in central storage.
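The key-matching rule can be sketched in a few lines of Python (my names, not IBM's; the real check happens in hardware on every storage reference):

```python
def access_allowed(program_key: int, frame_key: int,
                   fetch_protected: bool, is_store: bool) -> bool:
    """Sketch of the storage-protect-key check described above.

    Key 0 can do anything.  Otherwise the keys must match for a store,
    and for a fetch only when the frame is fetch-protected.
    """
    if program_key == 0:
        return True
    if is_store:
        return program_key == frame_key
    return (not fetch_protected) or program_key == frame_key
```

So a key-8 user program can read an unprotected key-1 system frame but cannot modify it, and cannot even read it if the fetch protect bit is on.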
The majority of storage is the User Region which programs and users can freely use, but some of it is reserved for the use of z/OS.
Named Storage Areas
- PSA (Prefixed Save Area) or "Low Core"
- Common to all programs running on the same processor (differs between processors). Note that any program may theoretically execute on any processor.
- System region
- Very small area reserved for use by the region control task of each address space.
- Nucleus
- Key 0d, read-only area, where operating system control programs reside.
- SQA (System Queue Area)
- Permanently page-fixed key 0d area containing more system-level data.
- LPA (Link Pack Area)
System-level programs commonly used by many address spaces are kept here where they can be shared.
- FLPA (Fixed Link Pack Area)
- Everything from SYS1.PARMLIB(IEAFIXxx). Fixed to prevent page stealing. The other areas have to be referenced continually to avoid being stolen.
- PLPA (Pageable Link Pack Area)
- Read-only programs. Everything from SYS1.LPALIB, and SYS1.PARMLIB(LPALSTxx).
- MLPA (Modified Link Pack Area)
- An extension to the PLPA which supersedes it. Use this to temporarily modify or update the PLPA with new or replacement modules.
- "Extended ____", e.g. Extended SQA, Extended Nucleus
- More of ____, only above the Line instead of below it
Storage is grouped into 256 pools according to these properties (pageability, protect key, etc.) by the Virtual Storage Manager (VSM).
Notably absent from the CPC hardware configuration are any storage devices. Tape drives and hard disks are external to the mainframe, they are referred to collectively as DASD (Direct Access Storage Devices) and they are linked to via channels.
The standard disk control unit is an "IBM 3990" and the standard disk unit is an "IBM 3390". These no longer really exist, but are instead emulated in bulk by actual modern disk devices.
Since a control unit can have multiple connected channels it is very simple to share a disk between multiple LPARs. This method is clunky though. A more efficient solution is to connect all of the LPARs' logical channel subsystems to one another in a channel-to-channel (CTC) ring. This enables them to share information. This is the ground level in building a sysplex.
A volume is divided into cylinders of varying size depending on the device.
A cylinder is divided into tracks of varying size depending on the device. On a 3390 device a track is 56,664 bytes.
Tracks are divided into blocks of varying and even variable size depending on whatever we like and however we choose to assign space on the volume. Blocks are uniquely addressed; they are the unit of record on the disk (not the byte, as you might think).
Blocks are of course divided into bytes and bits.
Instead of files, z/OS has data sets. Unlike byte stream files found on Linux or Windows, z/OS data sets are almost always subdivided into records. This is called a "Record File System" (RFS).
A record is a fixed number of bytes containing data. A record cannot span multiple volumes.
A data set is always stored beginning on a track boundary.
Each data set has a name, consisting of a series of qualifiers separated by dots.
- Each qualifier is 1 to 8 alphanumeric (or -$#@) characters long beginning with a letter (or $#@).
- The first qualifier is the High-Level Qualifier or HLQ.
- The total length of the data set name is 44 characters at most, including dots.
- This is the closest thing there is to a hierarchy in z/OS. There are no directories or subdirectories.
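Those naming rules are easy to encode; here's a sketch in Python (the regex is my reading of the rules above, not an official definition):

```python
import re

# One qualifier: 1 to 8 characters, starting with a letter or $ # @,
# followed by letters, digits, $ # @ or hyphens.
QUALIFIER = re.compile(r"^[A-Z$#@][A-Z0-9$#@-]{0,7}$")

def valid_dsname(name: str) -> bool:
    """Check a data set name against the rules listed above."""
    if len(name) > 44:          # total length limit, dots included
        return False
    quals = name.upper().split(".")
    return all(QUALIFIER.match(q) for q in quals)
```

"SYS1.PARMLIB" passes; a 9-character qualifier or a qualifier starting with a digit does not.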
Each DASD volume has a volume label of six characters (e.g. TEST01), stored at track 0 of cylinder 0, which identifies the volume and the location of its VTOC (Volume Table Of Contents). The VTOC lists the data sets (and space) that the volume has on it. Both VTOC and label are created by the ICKDSF utility. A data set name must be unique on a DASD volume, which means that a data set may be uniquely identified and located using a device type, a DASD volume serial and a data set name.
You do not have to do it this way, however. Data sets are almost always cataloged. Cataloged data sets can be located by name alone without knowing their precise location. The Master Catalog is a data set which usually contains only HLQs and pointers (aliases) to specific User Catalogs. These in turn catalog the physical locations of data sets beginning with specific HLQs. So, logically,
- the Master Catalog's location must ALWAYS be known and
- it should be backed up with uncommon thoroughness just in case of corruption.
Data sets can be allocated using
- Access Method services (see later)
- the ALLOCATE command in TSO (see later)
- Utility programs IDCAMS or DFSMS - SMS being the very important Storage Management Subsystem
- JCL commands (see later)
Data Set Attributes
A data set is initially organised into a single primary extent on one volume. If it grows too large it can take up a limited number of secondary extents which may be on different volumes. Multiple extents on one volume clearly impair performance. If you run out of space entirely, you get a system abend (B37, D37 or E37) and either have to compress the data set or allocate a new, bigger one as a replacement.
Each data set has a RECord ForMat (RECFM) which determines the relation between logical records (of length LRECL) and physical blocks (of length BLKSIZE) on disk.
- F = Fixed. One block = one record. Blocks are always the same fixed size, so records are always the same fixed size as the block. Seldom used.
- V = Variable. One block = one record, but record size is variable and so therefore is block size. (A 4-byte Record Descriptor Word (RDW) at the beginning of each record includes record length.) Seldom used.
- FB = Fixed Blocked. LRECL fixed. BLKSIZE also fixed, but at an integer multiple of LRECL, so that each block contains a fixed number (N) of records.
- VB = Variable Blocked. LRECL varies. BLKSIZE varies, but is generally larger, so each block usually contains several records. Each record still has an RDW, but each block also now has a BDW for its length.
- U = Undefined. No clear structure. Used usually only for executables.
Depending on the RECFM, LRECL and BLKSIZE may or may not need to be specified explicitly.
There is a gap between blocks (Inter-Block Gap) on disk, but there is not a gap between records within a block.
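A quick sketch of the FB arithmetic in Python: with LRECL=80 (the classic card-image record) and a BLKSIZE of 27920 — which, as I understand it, is chosen so a block fills half a 3390 track — each block holds 349 records.

```python
def fb_records_per_block(lrecl: int, blksize: int) -> int:
    """Records per block for an FB data set: BLKSIZE must be a
    whole multiple of LRECL, and N = BLKSIZE / LRECL."""
    if blksize % lrecl != 0:
        raise ValueError("FB requires BLKSIZE to be a multiple of LRECL")
    return blksize // lrecl

print(fb_records_per_block(80, 27920))  # 349 records per block
```

Fewer, larger blocks also means fewer inter-block gaps on the track, which is why "half-track blocking" is a common choice.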
An access method is a set of macros and programs to define and access a family of data set types. Different access methods handle different data set types, and some types can be accessed by more than one access method.
Data Set Types
- Sequential data set
Basic Sequential Access Method (BSAM)
Queued Sequential Access Method (QSAM)
To retrieve the 10th record from a sequential data set, you must first pass the preceding 9 records.
Tape drives can only store this kind of data set.
- Partitioned Data Set (PDS) or "library"
Basic Partitioned Access Method (BPAM)
Consists of a directory and its members, within a single data set. Each member has a simple name. The directory is read sequentially, but is small. It is then used to access a member non-sequentially.
Most system data sets are PDSes.
PDSes do not automatically recover lost space when a member is updated or deleted; they must be regularly "compressed".
Members of PDSes can themselves be used as sequential data sets. This is useful if you have a LOT of very small (<< 1 track) data sets to store, or want a handy way to collect a lot of SDSes.
PDSes can be concatenated to form large libraries.
- PDSE (Partitioned Data Set Extended)
Like a PDS but automatically reuses lost space i.e. no need for compression.
- Virtual Storage Access Method (VSAM) data sets
VSAM data sets are always variable-record-length and RECFM and BLKSIZE do not apply. They consist of index records and Control Areas (CAs). Each CA contains multiple Control Intervals (CI) and each CI contains data records, unused space, record descriptor fields (RDFs) and a single CI descriptor field (CIDF).
- Linear Data Set (LDS)
- Entry Sequence Data Set (ESDS)
- Relative Record Data Set (RRDS)
- records have numbers for random retrieval
- KSDS (Key-Sequenced Data Set)
- Records are stored in the order of a key field, so they can be accessed non-sequentially by key
- zFS (z/OS File System)
- VSAM data set for use with z/OS Unix System Services (see later). Supports all modern Unix conveniences (Access Control Lists etc.)
The following are all PDSes.
- SYS1.LPALIB
- Contains system execution modules that are loaded into the shared Link Pack Area when the system is initialized.
- SYS1.PARMLIB
- Contains control parameters for z/OS. Lots of them in one place.
Contains members of the form SYS1.PARMLIB(LPALSTxx) which can be filled with the names of additional data sets which you wish to concatenate on the end of the list in SYS1.LPALIB.
SYS1.PARMLIB(CONSOLxx) defines which I/O devices can be used as consoles to control z/OS.
SYS1.PARMLIB(IEASYMxx) also allows you to specify system symbols (environment variables by any other name) for multiple systems. System symbols can be defined using string operations on other system symbols. They have a name beginning with a "&" and ending with a ".". Some are statically defined at IPL, others are modifiable.
- SYS1.PROCLIB
- Contains cataloged JCL procedures provided with z/OS.
Among these are helpful procedures for compiling (and linking (and running)) programs written in various languages, including COBOL, as well as the JES2 cataloged procedure.
This is automatically searched whenever you refer to another JCL script in a JCL script (i.e. it's like an invisible JOBLIB DD statement).
- SYS1.LINKLIB
- Contains many of the basic execution modules of the system.
This is where programs are looked for when you call one from a JCL script using EXEC.
- SYS1.NUCLEUS
- Contains the basic supervisor modules ("kernel") of z/OS.
- Contains Supervisor Calls (SVCs).
IPL (Initial Program Load)
This is the act of loading a copy of the operating system from disk into storage, and then executing it. This is the System z equivalent to booting a desktop PC. IPLs can be cold, quick or warm.
- The process begins when a sysprog at a Hardware Management Console selects the LOAD function.
- Not all disks have loadable code on them. Those which do are SYSRES volumes, and are "IPLable". Each has a bootstrap module at cylinder 0, track 0 which is loaded into central storage at real address zero, then control is passed to it.
- The bootstrap reads the IPL control program IEAIPL00 and passes control to it.
- IEAIPL00 zeroes central storage, defines storage areas for the Master Scheduler and then locates the special SYS1.NUCLEUS data set on the SYSRES volume. SYS1.NUCLEUS contains the basic supervisor modules ("kernel") of z/OS. From these, IEAIPL00 loads IPL Resource Initialization Modules.
These IRIMs in turn load the operating system environment of control blocks and subsystems:
- read the IODF from the volume specified in the LOAD command
- load the rest of the nucleus
- initialize the SQA, ESQA, LSQA and PSA in virtual storage
- initialize real storage management, including segment tables
- load the Nucleus Initialization Program (NIP)
- The NIP invokes a variety of Resource Initialization Modules (RIMs) which basically set up the rest of the named areas of virtual storage: SQA, PLPA, FLPA, MLPA, CSA.
The Master Scheduler next starts the "system log and communications" task, then itself, then the Job Entry Subsystem, then starts other subsystems as specified in SYS1.PARMLIB(IEFSSNxx). The Job Entry Subsystem (JES) is what allows new jobs/subsystems (equivalent to Unix processes) to be started. JES handles job scripts that are submitted to it in Job Control Language (JCL).
The Master Scheduler Subsystem is loaded from the "master JCL" load module, SYS1.LINKLIB(MSTJCL00). Want to load from a different module? Go in SYS1.PARMLIB(IEASYSxx) and change the value of "MSTRJCL" from 00 to something else.
It runs in an address space called "*MASTER*". Because this is the first subsystem which starts at IPL, it has an ASID of 1.
What about shutdown? Shutdown is also largely automated, except for the automation task itself, which must be stopped manually.
Hardware Configuration Definition
This component consolidates hardware and software I/O configuration under one interface and outputs one (or more!) I/O definition files (IODFs).
New IODFs can be activated without an IPL.
The Hardware Configuration Manager is a GUI for the HCD.
Interaction with z/OS
The HCD defines, among other things, which I/O devices can be used as consoles to control z/OS.
Consoles come in three broad categories:
- Multiple Console Support consoles are connected locally. No network. No SNA.
- SNA Multiple Console Support consoles can connect remotely via z/OS Communications Server. (?)
- Extended Multiple Console Support consoles are everything else. (?)
Consoles are mainly defined in terms of which commands they're permitted to accept and which system messages they should be sent. The system console in particular is part of the Hardware Management Console.
In general, consoles are combined on a windowed PC desktop these days. Users typically interact with z/OS using a 3270 terminal emulated by a regular desktop computer.
The 3270 emulator accesses a command line interface called TSO/E (Time Sharing Options/Extended). This is quite limited, but it can do things like ALLOCATE (data sets) and CALL (programs). You can also put commands into a list (CLIST) and call those with a single EXEC 'CLIST <data set name>' command. REXX (REstructured eXtended eXecutor) is another interpreted, shell script-type language which offers the same functionality.
Consoles communicate with z/OS using...
TCP/IP (Transmission Control Protocol/Internet Protocol) and SNA (System Network Architecture)
SNA is a predecessor to TCP/IP. Both of them are communication protocols which operate in layers. The layer models used vary, but have the physical layer at the bottom with increasing levels of abstraction above, up to the application data layer. SNA does the job and has a lot of inertia, so TCP/IP hasn't completely replaced it yet.
The z/OS Communications Server (CS) provides SNA using VTAM (Virtual Telecommunications Access Method), and also TCP/IP. TCP/IP commands such as NETSTAT, PING and NSLOOKUP can be entered from any authorized TSO session. DISPLAY TCPIP and VARY TCPIP display and modify TCPIP settings respectively. TCP/IP allows you to define VIPAs (Virtual Internet Protocol Addresses) which can be dynamically transferred or shared, allowing load balancing among z/OS machines.
z/OS runs only one VTAM address space. Each end point of an SNA connection is known as a logical unit (LU), so a VTAM session is an LU-to-LU session. LUs include displays, printers and POS devices, but also CICS regions (address spaces) (see later). A physical unit (PU, not the same as a processing unit) controls one or more LUs. PU Type 5 is the mainframe; PU Type 4 is any wide-area network communication controller; PU Type 2 is a peripheral attached to either of the first two.
Unlike TCP/IP, VTAM supports a few different network topologies: subareas, and APPN (Advanced Peer-to-Peer Network). SNA networks are divided into domains. Each domain is a collection of resources managed by a Type 4 or 5 PU. Cross-domain resource managers (CDRMs) allow domains to communicate directly, including cross-domain LU sessions. APPN is a more traditional network with routing, and far superior to subareas.
3270 terminals work via an SNA 3270 data stream.
Most users simply use TSO to run a single program, the panel-based ISPF (Interactive System Productivity Facility).
When searching for data sets in ISPF:
- To search for partial matches, use an *
- If you do not enclose a full data set name in single quotes, the data set name will be prefixed automatically with a default prefix specified in the TSO PROFILE (usually your username). This can be changed using the PROFILE PREFIX command. (This does NOT apply on option 3.4 DSLIST.)
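The prefixing rule can be sketched like so (IBMUSER is a made-up default standing in for whatever your TSO PROFILE PREFIX is set to):

```python
def resolve_dsname(typed: str, prefix: str = "IBMUSER") -> str:
    """How TSO/ISPF interprets a typed data set name (outside option 3.4).

    A fully quoted name is taken as-is; anything else gets the
    profile prefix (usually your user ID) prepended.
    """
    if typed.startswith("'") and typed.endswith("'"):
        return typed.strip("'")       # fully qualified, used as-is
    return f"{prefix}.{typed}"        # otherwise the prefix is prepended
```

So typing JCL.CNTL really opens IBMUSER.JCL.CNTL, while 'SYS1.PARMLIB' opens exactly SYS1.PARMLIB.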
ISPF Editor commands:
- This is the command to run ISPF Editor from the TSO command line
- i, i5
- Insert a line, insert 5 lines
- d, d5, dd/dd
- Delete a line, delete 5 lines, delete a block of lines
- c, c5, cc/cc; then a or b
- Copy a line, copy 5 lines, copy a block of lines; to a point after or before
- m, m5, mm/mm; then a or b
- Move a line, move 5 lines, move a block of lines; to a point after or before
- x, x5, xx/xx
- Exclude a line, exclude 5 lines, exclude a block of lines
Ways to get to a Unix System Services shell on a z/OS system:
- Telnet or rlogin directly from any PC with, say, PuTTY. This requires the inetd daemon to be enabled on the z/OS machine to work.
- Using a 3270 emulator, go into TSO and run OMVS.
In the same way that ISPF is a panel interface replacement for the TSO command line, ISHELL is a panel interface replacement for OMVS.
JCL (Job Control Language)
This is the scripting language with which we tell z/OS which programs to execute, and with what data inputs and outputs.
JCL can be used to submit a job for batch processing or to start a procedure or started task.
JCL is all upper case.
Each valid JCL statement:
- Begins with two slashes "//".
- Then a statement label of 1 to 8 alphanumeric characters beginning with a letter.
- Then 1 or more spaces.
- There is then a statement type, "JOB", "EXEC" or "DD".
- Then 1 or more spaces.
- Then additional comma-separated parameters.
- Then 1 or more spaces.
- Everything after this is interpreted as a comment.
Total line length is at most 72 characters (plus 8 for automatically-generated card sequence numbers). Put a comma on the end of a line to indicate continuation to the next line.
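As a sketch of that anatomy, here's a toy Python parser for a single JCL statement. It's deliberately naive: it ignores continuations, in-stream data and comment statements, and a plain comma split would mishandle commas inside parentheses.

```python
def parse_jcl(line: str):
    """Split one JCL statement into (label, type, parameters),
    following the rules listed above.  A toy sketch only."""
    if not line.startswith("//"):
        raise ValueError("JCL statements begin with //")
    body = line[2:]
    # The label runs up to the first blank (and may be empty).
    label, _, rest = body.partition(" ")
    # Then the statement type, the parameters, and (ignored) comments.
    fields = rest.split(None, 2)
    stmt_type = fields[0]
    params = fields[1].split(",") if len(fields) > 1 else []
    return label, stmt_type, params

print(parse_jcl("//MYJOB JOB (ACCT),CLASS=A"))
```

This parses "//MYJOB JOB (ACCT),CLASS=A" into the label MYJOB, the type JOB, and the parameter list.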
Statement types and parameters
There are three major statement types, "JOB", "EXEC" and "DD" and two minor ones, "PROC" and "PEND".
"JOB"
There is only one of these, at the top of the script. The label of this statement provides the job with a name.
- REGION
- E.g. "6M" specifies the amount of storage to allocate to this job
- USER
- Specifies the user whose authority the job is to use
- NOTIFY
- Specifies the user to notify when the job is completed (e.g. "&SYSUID")
- TYPRUN
- Delays or holds the job from running, to be released later
- CLASS
- Directs the job to a particular input queue
- MSGCLASS
- Directs job output to a particular output queue
- MSGLEVEL
- Controls the number of system messages to be received
"EXEC"
Tells JCL to execute a program. Each EXEC statement is a job step.
- PGM
- The name of the program to execute. This is OPTIONAL - it could be replaced with, say, the name of another JCL procedure. You can put "EXEC PGM=*.<stepName>.<ddName>" to run the program in the data set referenced by DD statement <ddName> of earlier step <stepName>.
- PARM
- Parameters to pass to the program
- COND
- Boolean logic for controlling execution of other EXEC statements in this job (deprecated)
- TIME
- Imposes a time limit
"DD" (Data Descriptor)
Defines input and output data sets (or I/O devices) for an EXEC statement. Each EXEC statement has multiple DD statements associated with it. The label on a DD statement is the DDNAME and is important since the program named in the EXEC statement will look for these data sources by name. Important DD statement labels are SYSIN and SYSOUT.
- DSN (or DSNAME)
- Data Set Name. Does not actually have to exist; we can create it with other parameters below.
- SYSOUT
- Defines a print location. "//SYSOUT DD SYSOUT=*", for example, says to send output to the Job Entry Subsystem (JES) output area.
- LABEL
- Tape label expected.
- DUMMY
- For an input, this means "null input". For an output, this means "discard output"
- "*" e.g. "//SYSIN DD * ..."
- Allows you to insert the input text for the program directly into the JCL file. Terminate it with a "/*".
- "*,DLM=FF" e.g. "//SYSIN DD *,DLM=FF"
- As above, but with a customised terminating delimiter, in this case "FF".
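As a sketch of in-stream input, here it is fed to IEBGENER as SYSUT1. DLM is only needed when the data itself could contain "/*" or lines beginning "//"; the delimiter XX is arbitrary:

```jcl
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT2   DD SYSOUT=*
//SYSUT1   DD *,DLM=XX
THIS IN-STREAM DATA CAN EVEN CONTAIN /* AND // SAFELY
XX
```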
Data set DISPosition. This can come in three formats: "DISP=<status>", "DISP=(<status>,<normalEnd>)" or "DISP=(<status>,<normalEnd>,<abnormalEnd>)".
<status> can be any of:
- NEW
- Create a NEW data set and obtain EXCLUSIVE access to it. If the data set already exists you get an error. If you use this then you should provide additional parameters to the DD statement:
- "VOL=SER=" [sic]
- Volume serial
- UNIT
- Device type, invariably 3390
DCB (Data Control Block). Various subparameters:
- LRECL
- Logical record length. Not BLKSIZE!
- DSORG
- Data Set ORGanization e.g. sequential, partitioned, library
SPACE. Amount of disk storage requested for the new data set. Has two parameters.
The first is the unit of measure: TRK, CYL or an average block size (or KB or MB).
The second has three subparameters.
- Primary extent size as a multiple of the first parameter e.g. "SPACE=(CYL,10)" is 10 cylinders, no secondary extents
- (optional) Secondary extent size as a multiple of the first parameter e.g. "SPACE=(TRK,(10,5))" is 10 tracks primary, 5 tracks for each secondary extent
- (optional) Indicates that a PDS is being created - use this many directory blocks for the directory e.g. "SPACE=(TRK,(10,5,8))" is a PDS with 8 directory blocks
- OLD
- Get an EXISTING data set and obtain EXCLUSIVE access to it. If the data set is already in use you wait.
- MOD
- Like OLD, but if opened for output, output will be appended to the end of the data set (as opposed to, presumably, overwriting it?)
- SHR
- Get an EXISTING data set but do not obtain exclusive access. Other jobs will also be able to access the same data set provided they also use SHR. If the data set is already in exclusive use then you wait.
<normalEnd> and <abnormalEnd> indicate what to do if the entire job step ends normally or abnormally respectively and can be any of:
- DELETE
- Delete and uncatalog the data set
- UNCATLG
- Keep and uncatalog the data set
- KEEP
- Keep but don't catalog the data set
- CATLG
- Keep and catalog the data set
- PASS
- Allow a later job step to specify a final disposition
If not specified, the default action is to leave the data set as it was before the job step started.
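Pulling the DISP, UNIT, VOL, SPACE and DCB parameters together, a sketch of a DD statement that creates and catalogs a new data set (the data set name and volume serial WRK001 are invented; RECFM, the record format, is a standard DCB subparameter not listed above):

```jcl
//NEWDS    DD DSNAME=ZPROF.NEW.DATA,DISP=(NEW,CATLG,DELETE),
//            UNIT=3390,VOL=SER=WRK001,
//            SPACE=(TRK,(10,5)),
//            DCB=(RECFM=FB,LRECL=80,DSORG=PS)
```

DISP=(NEW,CATLG,DELETE) reads: create it, catalog it if the step ends normally, delete it if the step abends.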
Note: a single DDNAME can have multiple DD statements attached to it. In practice, these multiple data sets are concatenated for whatever purpose (e.g. if one fills with output, the next is used).
Note: the DSNAME is the actual name of a data set. The DDNAME is the label of the DD statement in which the data set in question is loaded. In any high-level language program, the DDNAME can be referred to explicitly. It is up to you to make sure that the DDNAME in question is appropriately defined when the program is EXECed. e.g. the program contains a line "OPEN FILE=XYZ" so your JCL must include something like "//XYZ DD DSNAME=<anything>".
A DD statement labelled "JOBLIB", placed after a JOB statement, specifies the library to search first when looking for programs executed by this job.
A DD statement labelled "STEPLIB", placed after an EXEC statement, specifies the library to search first when looking for the program executed by this EXEC statement. This overrides JOBLIB.
"PROC" and "PEND"
These mark the beginning and end of JCL procedures (subroutines). "PROC" must have a label so the procedure can be referred to. When you call a procedure with EXEC you can pass pretty much any variables you like (e.g. "//EXEC MYPROC,BLAHBLAH=ZPROF.AREA.CODES") which can be used inside the procedure ("//SORTIN DD DSN=&BLAHBLAH"). The procedure should have default values for these defined at the top, though (e.g. "//MYPROC PROC BLAHBLAH=SYS1.CODES,...").
DD statements inside procedures can be overridden below the procedure call statement, using "//<stepname>.<DDNAME> DD ...".
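Combining symbolic parameters and DD overrides into one schematic example (the SORT step is a sketch - a real DFSORT step would also need SORTOUT, SYSOUT and a SYSIN control statement - and the data set names are illustrative):

```jcl
//MYPROC   PROC BLAHBLAH=SYS1.CODES
//SORT     EXEC PGM=SORT
//SORTIN   DD DSN=&BLAHBLAH,DISP=SHR
//         PEND
//*
//* Call the procedure, passing a symbolic parameter and also
//* overriding the SORTIN DD statement of step SORT:
//RUN      EXEC MYPROC,BLAHBLAH=ZPROF.AREA.CODES
//SORT.SORTIN DD DSN=ZPROF.OTHER.CODES,DISP=SHR
```

Note that the explicit DD override wins here: SORTIN ends up pointing at ZPROF.OTHER.CODES regardless of the symbolic parameter.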
Jobs in z/OS are handled by JES, the Job Entry Subsystem.
Receiving jobs into the OS
Scheduling them for processing by z/OS.
Jobs are held in multiple queues.
Jobs are begun by an initiator which is what reads, interprets and executes JCL. The number of jobs that can be running at the same time is less than or equal to the number of initiators which are running at the same time. Each job is run in one initiator('s address space) and each initiator can only run one job at once. Each initiator has one or more job classes (a letter from A to Z, usually) and each job also has a class (specified in JCL in the "//MYJOB JOB CLASS=..." statement). A job can only be initiated by an initiator of the same class. Initiators are started by JES2 at the time of initial program load, or by the operator using JES2 initialization statements. They can also be started automatically by the Workload Manager (later) in order to achieve performance goals.
Controlling their output processing.
Output is also queued on the spool (SPOOL: Simultaneous Peripheral Operations OnLine) before retrieval by, for example, a printer.
Spooled data is stored in non-standard, inaccessible spool data sets.
Jobs which have been submitted or are in progress are listed in ISPF's System Display and Search Facility, SDSF. SDSF also allows you to directly enter z/OS system commands, monitor and control jobs and view the system log and so on.
Exactly what output is visible from a completed job depends on its MSGCLASS, which may specify to throw the output away.
Filters you can use at the SDSF command line include:
- PREFIX *
- OWNER *
- SYSNAME *
Lifecycle of a job
JES accepts jobs from various sources: input devices, input readers and existing running jobs.
The job is assigned a unique ID and all of its JCL and SYSIN data is placed onto the spool for future processing.
The job is put into the conversion queue.
JES fully interprets all of the JCL and converts it to converter/interpreter text, still on the spool. In the event of errors we jump straight to step 4, otherwise we proceed to step 3.
JES2 has a list of jobs in the queue, and also a list of initiators. When an initiator is done with its current job it requests another one. JES2 handles that request.
The processing phase is where the job actually gets done. SYSIN and SYSOUT are both connected to the spool.
SYSOUT (system-produced output) is processed and grouped with similar output. Depending on output class some may be sent to:
I.e. print queue.
Once all of this is done, the spool can be purged of the job's content, JCL, SYSIN etc.
The Workload Manager (WLM) component is a sysplex-wide component which:
- Classifies the work running on a sysplex into distinct service classes; defines per-class performance goals; uses these to manage work. Effectively an internal SLA
- Primarily takes care of goal achievement
Secondarily takes care of optimal use of system resources, from two contradictory viewpoints:
- The system (which means keeping resources busy)
- Individual address spaces (which means keeping resources free for use)
By way of:
- monitoring resource usage and distributing resources
- address space prioritisation
- page stealing
Which it tracks by monitoring:
- when new RAM is added/removed
- when new address spaces are created/removed
- when swaps begin/end
There is also the Intelligent Resource Director (IRD) which can dynamically modify LPAR weights, channel configurations and I/O operation priorities in order to meet programmed workload goals specified in the WLM.
Finally, Here Is How z/OS Actually Works
Address spaces are divided into four queues:
- IN-READY - in central storage and waiting for dispatch
- IN-WAIT - in central storage but waiting for some event to complete
- OUT-READY - swapped out but ready to execute
- OUT-WAIT - swapped out and waiting for some event to complete
Only the IN-READY address spaces can be selected for dispatch to a processor by the dispatcher component.
Each address space has a top-level, maximum-priority task called the Region Control Task. This can in turn call the ATTACH macro to begin subtasks. Each task is represented by a Task Control Block (TCB).
Because we are talking about multiprocessing, theoretically any task can be interrupted and suspended, its state saved, to allow another, higher-priority task to execute.
Tasks which are ready to run and have a high priority run first.
There are six types of interrupt:
- Supervisor Call (SVC) - call to a system service e.g. OPEN, GETMAIN, WTO
- I/O - something changes in an I/O channel
- External - operator presses a key or a time interval expires
- Restart - restart signal is received
- Program - program generates an error or page fault
- Machine check - machine fault
Each processor has registers which are data areas INSIDE THE PROCESSOR.
The Program Status Word (PSW) is a 128-bit data area which contains e.g.
- The address of the next program instruction (64 bits in z/Architecture)
- Control information about the program currently running
- Whether the processor can be interrupted right now
- Current Condition Code (CC) for conditional branching
- Addressing mode: 11b = 64-bit; 01b = 31-bit; 00b = 24-bit
- Whether the program is in supervisor state (0) or not (1)
- What the program's key is (0h to Fh, see address spaces)
When an interrupt occurs, the PSW is replaced with a new PSW which explains how to handle the interrupt.
There are also
- access registers (16 registers of 32 bits, used to identify address spaces)
- general registers (64 bits for e.g. storage addresses)
- floating point registers (64 bits)
- control registers (64 bits for the OS itself to use for e.g. DAT table locations)
A Service Request Block (SRB) represents a request for a system service. This serves as input to the SCHEDULE macro. These can only be created by programs running in a higher-authority mode called supervisor state.
After an interrupt is processed, a non-preemptable program will resume instantly, whereas a preemptable program might not.
The dispatcher component decides what gets done next (i.e. after an interrupt or task is completed or suspended or whatever):
- Special exits are done first (e.g. if a processor fails)
- SRBs have next highest priority
- Ready address spaces in priority order come next
Serialization is necessary so that multiple applications can access a single resource without data corruption.
The ENQ and RESERVE macros let programs request exclusive (for RW) or shared (for RO) access to a resource. DEQ releases that access.
ENQ, RESERVE and DEQ are processed by the Global Resource Serialization (GRS) component.
A lock is a named field in storage which says who is using a resource. Locks are arranged in a hierarchy to prevent deadlocks.
A note regarding character encoding.
z machines do not use ASCII but EBCDIC.
In order of character value, ASCII goes "space, punctuation, numbers, upper case, lower case".
EBCDIC goes: "space, punctuation, LOWER CASE, UPPER CASE, NUMBERS".
The two are incompatible. Mainframes can use Unicode, however.
Now let's look at the process of application design. Considerations during design include:
- Batch or online (continuous)?
- Availability and workload requirements
- Exception handling
- Data sources and access methods
The application designer determines which IT resources will be used and writes a spec.
The application programmer builds, tests and delivers the program based on the spec.
Source code is usually stored in a PDS.
SCLM (Software Configuration Library Manager) is the code repository which comes with ISPF.
After that we go into production
After that is the maintenance phase which lasts forever
Programming for z/OS
Computer languages are machine-dependent (assembly language) or machine-independent (everything else).
Languages are procedural (basically everything) or non-procedural (SQL, RPG, special-purpose querying and report-generating 4GLs).
Languages are compiled or interpreted.
High-level languages have to be compiled by a compiler into the form of an object deck.
Assembly language has a much more direct relationship with machine code - almost to the point of being a simple substitution cipher - and merely needs assembling by an assembler to form the object deck.
This object deck must then be link-edited by a link-editor or binder to create a load module (executable) before it can be executed.
Some useful JCL procedures for COBOL programs include
- COBOL compile: IGYWC
- COBOL compile/link: IGYWCL
- COBOL compile/link/go: IGYWCLG
- COBOL precompile/compile/link: DFHEITVL
Because these procedures are stored in SYS1.PROCLIB, which is automatically incorporated in the search list for any JCL program, you do not need to explicitly locate these procedures using a JOBLIB or STEPLIB statement.
To actually pass COBOL code into one of these procedures, you obviously have to manually override one of the procedure's DD statements. E.g. "//COBOL.SYSIN DD * ..."
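A sketch of such a job: IGYWCL's two steps are conventionally named COBOL (compile) and LKED (link-edit), which is what the override labels below refer to. The source and load library names (and the member HELLO) are invented:

```jcl
//COMPJOB  JOB 1,NOTIFY=&SYSUID
//CL       EXEC IGYWCL
//COBOL.SYSIN DD DSN=ZPROF.COBOL.SOURCE(HELLO),DISP=SHR
//LKED.SYSLMOD DD DSN=ZPROF.LOAD.LIB(HELLO),DISP=SHR
```

The SYSLMOD override directs the resulting load module into your own load library instead of the procedure's default.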
Does your COBOL call JNI services? Then include a SYSJAVA source file too.
What about PL/I?
The JCL procedure for compiling PL/I is IBMZC.
Java on z/OS is capable of accessing z/OS data sets as well as being called using COBOL and PL/I interfaces. Likewise Java can call programs written in other languages, using the Java Native Interface (JNI). JNI also lets languages actually create and share Java objects.
CLIST (Command LIST)
CLIST is interpreted. Basically it's a list of TSO/E commands. It does have some extra commands for variables, arithmetic, string handling, loops etc.
Run a CLIST with the TSO EXEC command.
REXX is interpreted. It is basically similar to CLIST in its capabilities. However, REXX can also, optionally, be compiled and link-edited. It can then be run on z/VM (and vice-versa). The compiled EXEC can be used as a substitute for an interpreted REXX script.
z/OS Language Environment
This is a common environment for all conforming High-Level Languages (HLLs) e.g. C, C++, COBOL, PL/I, FORTRAN. It provides a single "run-time environment" (i.e. a set of functions that these HLLs are free to call in a standardised way if they wish) of routines for message handling, condition handling and storage management, starting and stopping programs, common math/date/time services etc. These are available through a common interface to all HLLs. It also allows programming languages to call each other. It is a necessary component for programs written in these HLLs to run on z/OS.
The program management model consists of three basic concepts:
- A process is the top level component. Each process has its own address space.
- Each process contains at least one enclave which is a collection of routines making up an application. It consists of one main routine (the first to execute) and zero or more subroutines, plus some shared external data.
- Each enclave contains at least one thread, a basic instance of a particular routine. Threads are independent and have their own registers and condition handling mechanisms. The first thread in an enclave has the main() routine at the top and possibly subroutines running within it.
Any routine can spawn additional threads, enclaves and processes with their own subsidiary stuff.
The z/OS Language Environment also provides a condition-handling model enabling programs to signal conditions to other programs and interrogate information about those conditions.
It has a message-handling model allowing programs to send messages to one another.
It has a storage management model. Allows mixed-language applications access to a central set of storage management facilities.
It's generally very difficult to make sense of.
Many z/OS sites maintain a library of subroutines which are shared across the business. This library might include, for example, date conversion routines. As long as these subroutines are written using standard linkage conventions, they can be called from other languages, regardless of the language in which the subroutines are written.
Compiling and link-editing a program on z/OS
The process is:
Source module (PDS)
A copybook differs from a subroutine in that it is just a piece of raw text which you might copy into a program.
Precompiler or preprocessor
This takes care of special EXEC CICS or EXEC SQL statements which can theoretically be inserted verbatim into programs written in many different languages. It converts the special statements into valid code for compilation.
Preprocessed code is typically/maybe stored in a data set with DDNAME "SYSPUNCH".
Compilation is achieved using a batch job or through TSO/E using commands, CLISTs or ISPF. See above. z/OS includes some useful JCL procedures for this purpose, which are stored in SYS1.PROCLIB.
Object module (PDS)
In the object deck, every instruction, data element and label is given a relative address, starting from zero. The addresses are in the form of a base address plus a displacement. This means a program can be relocated in memory without breaking. Note that the displacement can only go up to 4095; programs larger than 4095 bytes must have more than one base address. References to external programs and subroutines are not resolved yet.
Object code is stored as "SYSLIN".
The DISP parameter of the SYSLIN DD statement indicates what to do with it:
- PASS: pass to the binder after compilation
- OLD: catalog in an existing object library
- KEEP: keep
- CATLG: add to a new object library (an object library is a library of object decks)
It is possible to store an object deck in a temporary library if it is only going to be linked/executed right away.
Input to the binder is "SYSLIN".
Basically this resolves inter-program references, mostly by combining the referenced object decks into one, internally consistent load module.
The binder's additional features allow you to, additionally:
- generate a different type of object called a program object (??)
- convert program objects to and from load modules
- store program objects and load modules in PDSEs and PDSes respectively
- directly load and then execute either of these once they're created
Load module (PDS)
The load module is "SYSLMOD".
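A sketch of a stand-alone link-edit step, under some assumptions: IEWL is a common alias for invoking the binder; the temporary data set &&OBJ stands in for an object deck passed from an earlier compile step; the load library name and member are made up:

```jcl
//LKED     EXEC PGM=IEWL,PARM='LIST,MAP,XREF'
//SYSPRINT DD SYSOUT=*
//SYSLIN   DD DSN=&&OBJ,DISP=(OLD,DELETE)
//SYSLMOD  DD DSN=ZPROF.LOAD.LIB(MYPROG),DISP=SHR
```

SYSLIN is the binder's input (the object deck) and SYSLMOD its output (the load module), matching the DDNAMEs described above; DISP=(OLD,DELETE) consumes the temporary object deck once linking is done.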
Batch loader/program management loader
The batch loader can take either an object deck or a load module and load it into virtual storage for execution.
The program management loader can load load modules or program objects into virtual storage for execution.
A re-entrant program is one that can be run multiple times in multiple threads simultaneously without causing problems. It is analogous to a purely functional program - it does not rely on static/global variables or call non-reentrant code or, as is rare but possible, modify itself.
An online program is different from a batch program in that it runs continuously.
A transaction system should pass the ACID test:
- Atomicity. Processes performed by a transaction are done as a whole or not at all.
- Consistency. The transaction must work only with consistent information. (?)
- Isolation. Processes coming from two or more transactions must be isolated from one another.
- Durability. Changes made by a transaction must be permanent.
A two-phase commit transaction works like this:
- There is a coordinating recovery manager called the syncpoint manager, and a range of resource managers.
- Before the UR (Unit of Recovery) makes any changes to a resource, it is in-reset.
- While the UR is requesting changes to resources, it is in-flight.
- At the point when the application is ready to either commit or roll back the changes, the syncpoint manager asks each resource manager to vote on whether its part of the UR (Unit of Recovery) is in a consistent state and can be committed. This is the in-doubt or in-prepare phase.
- Once the votes are back, the syncpoint manager logs what it is about to do. If they all vote YES, the syncpoint manager instructs all the resource managers to commit the changes (in-commit). If any of them vote NO, the syncpoint manager instructs all the resource managers to back out of the changes (in-backout).
- After a transaction is committed, we can mark off a SYNCPOINT.
If applications are developed in many different environments then it may not be possible to have a global syncpoint coordinator. The applications in question may even simply not support two-phase commits.
CICS (Customer Information Control System) is a transaction manager which acts like, and performs many of the functions of, the z/OS operating system:
- manages sharing of resources, data integrity, priority of execution, database requests
- allocates users, resources (storage, processor cycles)
- allows applications to run within it
CICS is effectively a layer of abstraction between the applications and the operating system, enabling all applications within CICS to run in a transaction context. Execution of a transaction involves running one or more applications. It has the same multithreading capabilities as z/OS itself, which means that CICS apps must be re-entrant and may run in multiple instances simultaneously. It has file control, storage management etc.
Each CICS starts as a single address space ("region"), but CICS also supports Multi-Region Operation with regions having specific functions:
- Terminal-Owning Region (TOR)
- Application-Owning Region (AOR)
- File-Owning Region (FOR)
CICS applications execute under CICS control, using CICS services and interfaces to access programs and files.
Each application program has a program definition which includes e.g. the language in which the program was written.
Basic Mapping Support (BMS) is a terminal control service which lets you easily program your app to create and format menus of options to send to the user.
CICS has a file control service which provides access to VSAM and BDAM data sets.
It has a database control service for DB2 and DL/I databases.
Other CICS services are task control (suspension/resumption), Temporary Storage (TS) and Transient Data (TD) control, interval control (for timed events), storage control and dump and trace control (abends).
The general format of a CICS command, which can be inserted anywhere in COBOL, C, PL/I and assembly language, is to just put something like "EXEC CICS <function> <option> <option> ... END-EXEC.".
CICS applications must be quasi-reentrant.
CICS apps are stored in a program library when not in use.
Transactions in CICS
Users have to sign in to CICS in order to gain authority to invoke transactions.
Each transaction is fired off by a single request consisting of a 1 to 4-character transaction identifier. This is typed at the terminal by the user.
CICS looks up the identifier in the Program Control Table (PCT) which consists of a list of installed transaction resource definitions.
Once located, the transaction definition tells CICS the name and identifier of the first program to load.
CICS looks up the program in the list of program definitions.
This program is loaded and control is passed to it. The program may pass control onwards to any other defined program, of course. EXEC CICS LINK starts a program as a "subroutine" of the current one; EXEC CICS XCTL simply abandons the current program in favour of another. EXEC CICS RETURN returns to the last point where LINK was called (not XCTL).
A non-conversational transaction processes one input, responds, and ends/closes/disappears. A conversational transaction, by contrast, involves the program holding resources while the user may make multiple inputs or changes before hitting the "commit" button. In a pseudo-conversational transaction, multiple non-conversational transactions combine to give the illusion of a single conversational transaction.
IMS (Information Management System)
IMS is a transaction manager and a (hierarchical (!)) database manager and some system services, much like CICS.
Like CICS, IMS works in "regions" (address spaces) and is capable of multitasking like z/OS.
IMS registers itself as a z/OS subsystem.
At its most basic level, a database is a logically structured set of data consisting of entities, the attributes and values of those entities, and the relationships between those entities.
A database has massive, obvious advantages over any kind of flat file storage - it's more efficient and it allows security (e.g. certain entities hidden from certain users). It preserves data integrity, provides a central point of control for the data, prevents duplication of data, allows multiple simultaneous reads/writes.
DBs are managed by a DataBase Administrator (DBA) who designs, implements, maintains, monitors, secures and backs up the database - everything except putting content into it.
Interactions with databases are best understood in terms of functions.
A DataBase Management System (DBMS) is a system for managing databases. In a hierarchical DBMS, the data is navigational, which means that an application has to successfully navigate (i.e. know) the structure of the hierarchy of entities in order to query it. In a relational DBMS, this is not the case: entities are represented as rows in tables, each attribute is a column of that table, values are stored in cells in the table and the relations link columns together. There are also temporary tables which hold intermediate query results, and result tables, which are returned when a query is completed. An RDBMS also includes indices which are pointers to rows of a table. Unlike the rows themselves, indices are sorted. They can be used to accelerate lookups, and to enforce uniqueness. Indices are stored in specialised data sets called index spaces. Lastly, an RDBMS contains keys, which uniquely identify rows and/or enforce referential integrity.
Tables are kept in a table space VSAM data set. These come in three varieties, Simple, Segmented and Partitioned.
A view is basically a pseudotable created by a query from actual tables. By permitting access only to views, not tables, you can selectively allow users access to data.
A storage group is a set of volumes on DASD which hold the data sets in which the tables and indices are stored.
DB2 has its own data types but you can define User-defined Data Types (UDTs) based upon them.
DB2 also has User-Defined Functions (UDFs).
DB2 has triggers which are actions set to occur automatically when an insert, update or delete occurs on a specific table.
Large OBjects (LOBs) (Binary Large OBjects (BLOBs), Character Large OBjects (CLOBs) and Double Byte Character Large OBjects (DBCLOBs)) are stored in auxiliary tables with pointers in the column where they're supposed to be.
DB2 has stored procedures (effectively subroutines).
DB2 maintains tables of metadata about everything else it is storing: SYSIBM.SYSTABLES, SYSIBM.SYSCOLUMNS, SYSIBM.SYSTABAUTH for authorized users and SYSIBM.SYSINDEXES.
DB2 stores events in a log, which is archived when it's full, and can be used to roll back changes.
The DB2 database is maintained by the DBA through a set of utilities, submitted using JCL. These include LOAD (populates tables), UNLOAD (move/copy data elsewhere), REORG (reorganize data), RUNSTATS (get stats/performance information), COPY (take DB image), MERGECOPY (merge incremental changes), RECOVER (recover from a DB image), REBUILD INDEX, CHECK (for inconsistencies). DB2 also has its own internal command line, DB2I (DB2 Interactive).
SQL (Structured Query Language)
Falls into three broad categories: DML (Data Manipulation Language) for reading and modifying data, DDL (Data Definition Language) for defining, modifying and dropping DB2 objects, and DCL (Data Control Language) for authorization control.
Is entered using SPUFI (SQL Processing Using File Input), which is part of DB2I and can be used to test and save sequences of SQL queries, and QMF which is less powerful (only one query at a time) but has powerful reporting capabilities. SPUFI queries (input) are typically stored as members in a PDS(E) data set ZPROF.SPUFI.CNTL while its output goes to a sequential data set ZPROF.SPUFI.OUTPUT. Note: reusing the same output would obviously overwrite previous query results.
The terminator between SQL statements is a semicolon, ";".
Is put verbatim (statically or dynamically constructed) into the source code of a program
DB2 has a precompiler which can interpret embedded DB2 calls. DCLGEN can be used to generate these calls (I think). Code precompiled in this way also generates a DataBase Request Module which is "compiled and bound" much like a regular program (this step checks authorization, syntax, and optimization). EXPLAIN can be used to find out about these optimizations. If multiple subroutines have their own DB2 calls then multiple packages are generated, which have to be combined into a plan of all the packages of one project. Handily, the plan does not have to be totally recompiled when a small SQL change is made. The plan name has to be specified when the original load module is run. Cursors allow application programs to retrieve rows from the result set one at a time.
IMS Database Manager
IMS is a hierarchical database system. In IMS a "database" is more like what we usually call a "table". Data types are divided into hierarchical segments. The parent-child relationship is what counts.
IMS, like DB2, makes use of all the z/OS mod cons: runs in multiple address spaces, uses cross-memory services, and functions in a sysplex.
z/OS HTTP Server
z/OS HTTP Server can function as a stand-alone server of relatively static capabilities, a scalable server which responds to traffic changes, and as a combination of multiple instances of the two.
HTTP Server - mercifully - converts text documents from EBCDIC to ASCII before serving them - provided you select the correct FTP transport when you upload them.
CGI (Common Gateway Interface) allows HTTP Server to pass an HTTP request to an application, then receive and transmit the generated response. CGI requires a separate address space for every request, though. Not good. FastCGI can be used to work around this.
HTTP Server can instead incorporate plug-ins, each incorporating one or more servlets which connect to, say, CICS or DB2. The plug-ins can communicate with CICS directly, or via a J2EE server with an EJB container inside it, or even via a J2EE server with both an EJB container and a Web container in it - allowing you to separate all business logic from HTTP Server itself.
HTTP Server also offers performance and usage monitoring, trace, logging, SNMP (!), cookies, accurate processing of the HTTP "accept [these data types]" request header, persistent connections, virtual hosts, thread-level security, SSL, LDAP (Lightweight Directory Access Protocol), certificate authentication, proxy support and file caching.
The WebSphere HTTP Server plug-in is an IBM product that plugs into various HTTP servers including z/OS. Based on httpd.conf, the server decides whether or not to direct an HTTP request to the plug-in, which then uses its own XML config file to decide which application server to pass the request on to.
WebSphere Application Server on z/OS
WAS is IBM's J2EE implementation. It runs in a Java Virtual Machine (JVM) and supports servlets, Java Server Pages (JSPs), Enterprise Java Beans (EJBs), CORBA, HTML, HTTP, IIOP, etc. etc. etc. Internally, WAS has an EJB container and a Web container for everything other than EJBs.
WAS consists of cells, which are the top level objects which potentially span multiple physical z/OS machines through the use of a Coupling Facility. Each cell contains multiple nodes and each node contains multiple servers. Each server consists of a Controller Region (CR) and one or more Servant Regions (SRs) which are started automatically when work arrives. Here, we (traditionally) use "region" to mean "address space". Each node also has a node agent with its own CR, and each z/OS system additionally has a daemon server, also with its own CR. One of the nodes is nominated as a network deployment manager (DM) for the whole cell, and communicates with the node agents to control all of the servers within those nodes. All of this is possible in a Network Deployment configuration; in a Base configuration, a cell contains only one node, there is no DM, and so each node, while it can still contain multiple servers, can only administrate those servers individually rather than as groups.
Servers can also cluster vertically (within a node within an LPAR within a system) and horizontally (across multiple LPARs in multiple nodes in multiple machines).
J2EE is: Functional (does stuff), Reliable (under changing conditions), Usable, Efficient (in use of system resources), Maintainable and Portable (across environments).
On z/OS specifically, WAS can take advantage of workload consolidation (many servers on one LPAR, for example) and high availability through clustering, and integrates with RACF. The Automatic Restart Manager (ARM) restarts servers which die. WAS can perform routing, queueing and prioritization of work based on current system utilization (Performance Index, PI). But performance ultimately depends on well-written applications, which may be a polite way of saying that WAS is not high-performing.
Java program access to external system resources is performed through resource adapters. Their design is specified by the J2EE Connector Architecture (JCA). Adapters have to cope with many different protocols and resource designs. Resource connections can be time-consuming to create; they must be secure, high-performing and monitorable, with good diagnostics and a good quality of service. WAS has adapters for CICS, DB2 and IMS. Separate from this concept are connectors which are external to WAS - these allow CICS (via CICS Transaction Gateway), DB2 (via DB2 JDBC (Java DataBase Connectivity)) and IMS (via IMS Connect) to complementarily connect to various application servers. JDBC in particular is a standard interface for all databases, while IBM provides a driver to adapt it to work with DB2.
Messaging and Queuing
Messaging is the concept that programs communicate by sending each other messages rather than by calling each other directly.
Queuing means that the messages are placed in queues in storage, allowing various logically unrelated programs to access them at various speeds and times.
WebSphere MQ is IBM's messaging/queuing product. The WebSphere MQ API is called the Message Queue Interface (MQI). In synchronous communication, program A puts a message in a queue for program B. Eventually, B takes the message, processes it and puts a response in a queue for A. Finally, A is able to pick up the response and continue to process. In asynchronous communication, A does not wait for a response before continuing to process. Another program, C, picks up the responses from B. Or, A could carry on with processing for some time before retrieving the response.
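The synchronous pattern can be sketched with plain Python queues - this is a conceptual model of queued communication, not the real MQI API, and the queue and message names are made up:

```python
# Conceptual sketch of queued messaging: A and B never call each
# other directly; they only put and get messages on queues.
from queue import Queue

to_b = Queue()  # queue that B reads (A's requests)
to_a = Queue()  # queue that A reads (B's replies)

def program_b():
    msg = to_b.get()             # B takes the request off its queue...
    to_a.put(f"reply to {msg}")  # ...processes it, and queues a reply

# Synchronous style: A puts a request, then waits on the reply queue.
to_b.put("request-1")
program_b()
print(to_a.get())  # reply to request-1

# Asynchronous style: A puts a request and carries on processing;
# the reply is picked up later, by A or by another program, C.
to_b.put("request-2")
# ... A continues with other work here ...
program_b()
reply = to_a.get()  # retrieved whenever A (or C) gets around to it
```

Note that B's code is identical in both cases; synchronous versus asynchronous is purely a question of when the requester reads the reply queue.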
MQ has four message types: datagram (no response expected), request (reply expected), reply (to a request), report (describes an event such as confirmation of delivery or an error).
Queues are managed by a queue manager which ensures that queues are stored safely and are recoverable in the event of outages, and that messages are delivered precisely once (assured delivery).
- Local queues are owned by this queue manager.
- Remote queues are owned by another queue manager and are not "real" - all you see locally is a definition of a remote queue. Programs cannot read messages from remote queues.
- A transmission queue is a local queue of outbound messages used internally by the QM.
- An initiation queue is monitored by the trigger manager which triggers applications when a suitable message arrives, and the channel initiator which can start inter-QM transmissions when a suitable message arrives.
- A dead-letter queue stores undeliverable messages (e.g. the destination queue is full, does not exist or is put-inhibited; the user is not authorized; the message has a duplicate sequence number or is too large).
Channels are logical communication links. Message channels connect QMs using MCAs (Message Channel Agents). MQI channels connect QMs to WMQ clients.
A reliable message transport can break a single transaction spanning multiple systems (write to local DB, send message to remote mirror DB, write to mirror DB) into three (write to local DB, put message into outbound queue; move message from local to mirror system; retrieve message, write to mirror DB), two of which are local (hence closing very fast and with low probability of failure) and one of which is handled relatively safely by MQ.
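The three local units of work can be sketched like this - an illustrative toy model, not MQ code, with made-up names throughout:

```python
# Toy model of splitting one distributed transaction into three
# local ones, joined by assured message delivery.
outbound = []  # local transmission queue (outbound messages)
inbound = []   # mirror system's input queue
local_db, mirror_db = [], []

def unit_of_work_1(record):
    # Local transaction: the DB write and the queue put succeed
    # (or fail) together, so this closes fast.
    local_db.append(record)
    outbound.append(record)

def transport():
    # Handled by the queue managers, with assured delivery:
    # each message is moved precisely once.
    while outbound:
        inbound.append(outbound.pop(0))

def unit_of_work_2():
    # Local transaction on the mirror system.
    while inbound:
        mirror_db.append(inbound.pop(0))

unit_of_work_1("row-42")
transport()
unit_of_work_2()
print(local_db == mirror_db)  # True
```

The two database transactions are now purely local (fast, unlikely to fail), and the only cross-system step is the message transfer, which MQ handles safely.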
WMQ has adapters for CICS, IMS, and batch or TSO/E (?).
z/OS System Programming
As performed by a SysProg - the guy who installs, customizes, manages and maintains the mainframe and its operating system so that it meets its SLAs.
Working on mainframes usually - for size, workload or even audit reasons - requires a certain amount of *separation of duties* among various people. The exception might be test systems.
System libraries come in several large classifications, distributed across volumes:
- z/OS system libraries are stored in system residence volumes (SYSRES). These can be backed up, modified, tested, IPLed-from, and rolled back. z/OS fixes are managed using System Modification Program/Extended (SMP/E).
- Non-z/OS IBM software libraries (DB2, CICS) and non-IBM software are usually kept on separate volumes from SYSRES.
- Customization data is useful things like SYS1.PARMLIB, SYS1.PROCLIB, the master catalog (of data sets), page data sets, JES spools and SMP/E itself.
- User data - this is the largest pool of volumes, managed by System Managed Storage (SMS) based on frequency of access and so on.
Where z/OS looks for a program when it's requested through a system service, in sequence:
- Job Pack Area (JPA) in storage (i.e. somewhere already in memory).
- TASKLIB (?)
- STEPLIB - specified as a DD under the EXEC statement
- JOBLIB - specified as a DD under the JOB statement
- Link Pack Area (LPA) (i.e. stuff in virtual storage):
  - Dynamic LPA modules, as specified in PROGxx members
  - Fixed LPA modules, as specified in SYS1.PARMLIB(IEAFIXxx)
  - Modified LPA modules, as specified in SYS1.PARMLIB(IEALPAxx)
  - Pageable LPA modules, as specified in SYS1.PARMLIB(LPALSTxx)
- Libraries specified in PROGxx and LNKLSTxx
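The search order above amounts to an ordered lookup, which can be sketched like so - a conceptual model only, not how z/OS contents supervision is actually implemented, and the library contents are invented:

```python
# Sketch of the program search order as a first-match lookup.
SEARCH_ORDER = [
    "JPA", "TASKLIB", "STEPLIB", "JOBLIB",
    "LPA-dynamic", "LPA-fixed", "LPA-modified", "LPA-pageable",
    "LNKLST",
]

def find_program(name, libraries):
    """Return the first location in the search order holding `name`."""
    for location in SEARCH_ORDER:
        if name in libraries.get(location, set()):
            return location
    return None  # nowhere to be found: the request fails

# A module present in both STEPLIB and LNKLST is taken from STEPLIB,
# which is how you test a new version without touching the system copy.
libs = {"STEPLIB": {"MYPROG"}, "LNKLST": {"MYPROG", "IEBCOPY"}}
print(find_program("MYPROG", libs))   # STEPLIB
print(find_program("IEBCOPY", libs))  # LNKLST
```

The practical consequence is the one in the comment: earlier entries shadow later ones, so a STEPLIB copy overrides the installed system copy for that job step only.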
Changes are strictly controlled by disciplined procedures and audit:
- Service managers manage SLAs.
- Operations staff control the implementation of changes, and also correct problems arising from a change or back it out.
- System programmers are usually those who originate and implement changes.
- Changes are given a risk assessment.
- Changes are stored in a change control record system alongside a program management system. This is required under Sarbanes-Oxley (2002).
- Change records record who, what, when, where, how, priority, risk and impact.
- Production control staff manage backups, take databases down and bring them up as part of a schedule, and move new code (i.e. modified business applications) into production.
SMP/E (System Modification Program/Extended)
Software is composed of load modules, but also macros, help panels, CLISTs and other z/OS library members, all collectively known as software elements. A SYStem MODification or SYSMOD is a package of elements and control information which SMP/E can use to install or modify software. SYSMODs come in four varieties:
- FUNCTION - a new product, product release, or product function. Come in base (new stuff) or dependent (modify existing stuff) varieties.
- PTF - Program Temporary Fix. Fixes a bug for everybody, pre-emptively for the majority of users.
- APAR - Authorized Program Analysis Report. Fixes a bug for a specific user/environment/customer, after it has arisen. Not for wide release.
- USERMOD - one you made yourself.
PTFs can depend on other PTFs, APARs on PTFs and other APARs, and USERMODs on PTFs, APARs and other USERMODs. SYSMODs can also end up installed in multiple places. If a further SYSMOD modifies one of those locations, that change needs to be propagated to the other locations.
SMP/E tracks all elements, their modifications and their updates, using modification identifiers: Function Modification Identifiers, Replacement Modification Identifiers and Update Modification Identifiers (FMIDs, RMIDs and UMIDs). SMP/E stores and tracks all of this information in the Consolidated Software Inventory (CSI), a collection of VSAM data sets.
We have distribution libraries (master copies) and target libraries (live code).
SMP/E has three basic verbs:
- RECEIVE: take a SYSMOD into SMP/E
- APPLY: install a SYSMOD into target libraries
- ACCEPT: if APPLY worked, install a SYSMOD into distribution libraries
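The three verbs enforce a strict progression, which this toy model illustrates - illustrative Python only, with a made-up SYSMOD name, not anything resembling real SMP/E commands:

```python
# Toy model of the SMP/E life cycle: a SYSMOD must be RECEIVEd
# before it can be APPLYed, and APPLYed before it can be ACCEPTed.
received, target, distribution = set(), set(), set()

def receive(sysmod):
    received.add(sysmod)          # taken into SMP/E's custody

def apply(sysmod):
    assert sysmod in received, "must RECEIVE before APPLY"
    target.add(sysmod)            # installed into target (live) libraries

def accept(sysmod):
    assert sysmod in target, "must APPLY before ACCEPT"
    distribution.add(sysmod)      # committed to distribution (master) copies

receive("UK12345")  # a hypothetical PTF arrives
apply("UK12345")    # goes live in the target libraries
accept("UK12345")   # once proven, committed to the master copies
print("UK12345" in distribution)  # True
```

The asymmetry matters: until ACCEPT, the distribution libraries still hold the old code, so a bad SYSMOD can be backed out of the target libraries.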
RACF (Resource Access Control Facility) is the best-known feature of IBM Security Server. Others include a firewall, LDAP server, Kerberos network authentication, Enterprise Identity Mapping and Public Key Infrastructure.
RACF basically allows you to identify and authenticate users (using a password), authorize users to access protected resources, log and report unauthorized access attempts, and allow applications to use RACF macros.
SAF (System Authorization Facility) is part of z/OS. The SAF router is called when a resource managing component or subsystem reaches what is known as a *control point* - and it can function happily without RACF, or with something other than RACF instead, but works best with RACF.
The security administrator is the top-level user in RACF, and is the one with the SPECIAL attribute. Access authorities to RACF resources are NONE, READ, UPDATE, ALTER and EXECUTE.
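As a rough illustration of how those authorities relate, here is a sketch of an access check - this is my own simplified model, not RACF's actual algorithm; it assumes the common ordering NONE < READ < UPDATE < ALTER and treats EXECUTE (run a program without being able to read it) as a special case:

```python
# Illustrative access check over a simplified authority hierarchy
# (NONE < READ < UPDATE < ALTER); EXECUTE is modelled separately
# because it permits running, not reading.
ORDER = {"NONE": 0, "READ": 1, "UPDATE": 2, "ALTER": 3}

def permits(granted, requested):
    """Does the granted authority cover the requested access?"""
    if requested == "EXECUTE":
        # read access (or better) is assumed to imply execute here
        return granted in ("EXECUTE", "READ", "UPDATE", "ALTER")
    if granted == "EXECUTE":
        return False  # execute-only grants nothing else
    return ORDER[granted] >= ORDER[requested]

print(permits("UPDATE", "READ"))   # True: higher authority covers lower
print(permits("READ", "UPDATE"))   # False
print(permits("EXECUTE", "READ"))  # False: execute-only cannot read
```

The real product has more levels and more nuance; the sketch only captures the idea that authorities form a hierarchy rather than a flat set of flags.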
CICS and DB2 use RACF. Operator consoles can be limited in their capabilities by RACF.
The Authorized Program Facility (APF) allows selected programs (as listed in APF libraries, SYS1.LINKLIB, SYS1.SVCLIB, SYS1.LPALIB, and whatever else you authorize) access to sensitive system functions. APF-authorized programs are basically omnipotent, able to put themselves into the supervisor state, use PSW keys 0 to 7, execute privileged instructions, override storage protection, and disable logging.
z/OS Utility Programs
- IEFBR14 - Returns 0 (success) and does nothing else. Useful if you want to run isolated DD statements
- IEBGENER - Copies one sequential data set, SYSUT1, to another, SYSUT2, with control parameters SYSIN and output SYSPRINT
- IEBCOPY - Copies member(s) of a PDS to another, or compresses a PDS. ISPF option 3.3 uses this under the covers
- IEBDG - Generates record sets containing various types of test data
- IDCAMS - Creates and manipulates VSAM data sets
- IEBUPDTE - Creates multiple members in a PDS or updates records within a member - used mainly for program distribution/maintenance, to create/maintain JCL procedure libraries
- IEHLIST - Gives VTOC information for data sets
- IEHINITT - Writes standard labels on tapes
- IEHPROGM - Superseded by IDCAMS
- ICKDSF - Initialises disk volumes
- SPZAP - Patches disk records (basically provides a bit-level diff for update purposes)
- ADRDSSU - Disk dump/restore
- RMF - Resource Measurement Facility
About the mastery test
90 minutes. Lots of time left over, not time consuming. You can leave early if you wish!
65 questions, all taken from the book. Multiple choice: 4 options, 1 correct answer. For acronyms with 4 options, make a note and come back; other questions may jog your memory. A broad understanding of the question will usually knock out a few possibilities.
Pass mark is 75% (49/65) on practice, 66% (43/65) on real (seems harder).
You are NOT given the answers at the end of either test; if there are questions you are uncertain about, write them out in FULL for future reference. Final marks are broken down into 4 categories, "Application Programming on z/OS", "Introduction to z/OS and the Mainframe Environment", "Online Workloads for z/OS" and "System Programming on z/OS" with 10, 40, 8 and 7 questions respectively - but they are not in any order and you will not know which question falls into which category. These correspond to parts of the book.
Questions can come from literally anywhere in the WHOLE book, so you can't skip any chapter at all.