MVS I/O like unix pipes


IBM Mainframe Forums -> PL/I & Assembler
vshchukin

New User


Joined: 13 Mar 2014
Posts: 11
Location: Czech Republic

PostPosted: Wed Apr 02, 2014 3:10 pm

Hello, everybody.

I want to discuss the best way of calling an application and retrieving its SYSPRINT without affecting the spool. For example, it might be necessary to call some legacy program, like DSNUTILB, that doesn't support custom DD names for I/O. In Unix there are pipes, and a single call to popen achieves this. But as far as I can see, there is no analogue of pipes in MVS.

Theoretically, the following two approaches could be used:
1. Create a new address space using ASCRE and isolate the standard DDs (SYSIN and SYSPRINT) there. That's how Unix does it. But it's complicated; for example, the initialization routine must be placed either in the LPA or in the LNKLST.
2. SVC screening, another complex approach. One could intercept SVCs 19, 20 and 22 (OPEN, CLOSE and OPEN TYPE=J respectively) in order to change the standard DD names to user-specified ones.

Maybe I'm missing something and there is an obvious way to do this? The two approaches above don't look like ordinary programming interfaces. I'm looking for a painless way to do it.

Thanks,
Vadim.
David Robinson

Active User


Joined: 21 Dec 2011
Posts: 199
Location: UK

PostPosted: Wed Apr 02, 2014 3:18 pm

I may be missing your point, but generally if I want to read some SYSPRINT output from a job I would just write the SYSPRINT to a dataset instead of the spool.

You can always run an extra step to then copy it to the spool if required.
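David's suggestion can be sketched in JCL. This is a minimal illustration, not a tested deck: the job name, subsystem ID, utility ID, and library name are all invented for the example.

```jcl
//UTILJOB  JOB (ACCT),'CAPTURE SYSPRINT',CLASS=A,MSGCLASS=X
//*--- Step 1: run the utility; SYSPRINT goes to a temporary
//*--- dataset instead of the spool (all names illustrative).
//STEP1    EXEC PGM=DSNUTILB,PARM='DSN1,MYUTIL'
//STEPLIB  DD DISP=SHR,DSN=DSN.SDSNLOAD
//SYSPRINT DD DSN=&&UTILOUT,DISP=(NEW,PASS),
//            UNIT=SYSDA,SPACE=(TRK,(5,5),RLSE)
//SYSIN    DD *
  ... utility control statements ...
/*
//*--- Step 2 (optional): copy the captured output to the spool.
//STEP2    EXEC PGM=IEBGENER
//SYSUT1   DD DSN=&&UTILOUT,DISP=(OLD,DELETE)
//SYSUT2   DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
```

Because &&UTILOUT is a temporary dataset, it is deleted at end of job and needs no long-term allocation rights.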
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Wed Apr 02, 2014 6:29 pm

If you want pipes, have you used your favourite search engine?

Why would you need customisable DDNAMES?

If you want to compare and complain that OS/MVS/z/OS is not like *nix, why don't you instead wonder why *nix is not like the other? It is an odd sort of thing to do.

If you explain clearly what it is that you want, with examples if necessary, then there may be suggestions to be made. With what you have said, I think David Robinson has a sufficient answer.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Wed Apr 02, 2014 6:55 pm

Hello,

FWIW, we (the industry) have been writing to a disk file and then copying to the spool when needed for a Very Long time. Unless I missed a memo, this is typically the way this is done.
vshchukin

New User


Joined: 13 Mar 2014
Posts: 11
Location: Czech Republic

PostPosted: Wed Apr 02, 2014 7:11 pm

Hello, guys. Thanks for the replies.

Simple example: my program needs to invoke DSNUTILB, and that program will pollute my application's SYSPRINT; I'm trying to avoid that. As for the proposed method of redirecting SYSPRINT to a dataset, I believe it's wrong. First, the application must have the rights to at least create that dataset, which is not a good requirement. Second, what if you want to invoke two programs concurrently and retrieve their SYSPRINT within your single address space? There is only one SYSPRINT per address space.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Wed Apr 02, 2014 7:25 pm

Hello,

What is an issue to you has been the "norm" on the mainframe . . .

Suggest you concentrate on the ways you can use the mainframe to get what you need done and Not spend time being frustrated about how the mainframe does not work just like other environments.

If all you need to do is write a dataset and then parse out the "keepers", there should be no "rights" problem. Every job can create temporary datasets that are not retained for posterity but are used for the single process only. Then the space is returned to the system.

Quote:
what if you want to invoke two programs concurrently and retrieve their SYSPRINT within your single address space?
On a normal mainframe application there is NO need to do this. Processes run serially, Not concurrently.
vshchukin

New User


Joined: 13 Mar 2014
Posts: 11
Location: Czech Republic

PostPosted: Wed Apr 02, 2014 7:36 pm

Dick,

I have to say there is no frustration in my posts :-)

Quote:
On a normal mainframe application there is NO need to do this. Processes run serially, Not concurrently.


However, I need to run DB2 utilities concurrently: in my application, several tasks run DSNUTILB concurrently, and at the moment I don't see any way to synchronize them except serializing their access to DSNUTILB via the ENQ service. Without synchronization, the output from several DSNUTILB invocations will be interleaved in SYSPRINT.

Thanks,
Vadim.
enrico-sorichetti

Superior Member


Joined: 14 Mar 2007
Posts: 10872
Location: italy

PostPosted: Wed Apr 02, 2014 7:42 pm

Quote:
I need to run DB2 utilities concurrently,


WHY ?

wouldn't it be simpler to submit multiple jobs?
vshchukin

New User


Joined: 13 Mar 2014
Posts: 11
Location: Czech Republic

PostPosted: Wed Apr 02, 2014 7:52 pm

enrico-sorichetti wrote:
Quote:
I need to run DB2 utilities concurrently,


WHY ?

wouldn't it be simpler to submit multiple jobs?


Just because it's an application requirement to serve multiple clients and perform such tasks for them.

A job is a huge overhead; one job per DSNUTILB invocation would reduce performance greatly.

By the way, for those who are interested in this question, there is a workaround: the DSNUTILU stored procedure can invoke DB2 utilities. But there are a lot of problems regarding its use (such as secondary authorization IDs), so be aware.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Wed Apr 02, 2014 8:09 pm

Hello,

Quote:
Job is a huge overhead, one job per DSNUTILB invocation will reduce the performance greatly
Why is this believed? What statistics or comparative runs have been gathered to show this?

I ask because some of my clients do work at the multi-terabyte volume and have no concern about interleaving or overhead running multiple jobs instead of one catch-all.
vshchukin

New User


Joined: 13 Mar 2014
Posts: 11
Location: Czech Republic

PostPosted: Wed Apr 02, 2014 8:19 pm

dick scherrer wrote:
Hello,

Quote:
Job is a huge overhead, one job per DSNUTILB invocation will reduce the performance greatly
Why is this believed? What statistics or comparative runs have been gathered to show this?


It's a good question about statistics, Dick :-) Of course, I can compare the CPU time consumption of a direct invocation with a JES-based one, and maybe I will share the results in this topic. But even the subjective perception is obvious to me: when I run a utility directly, it's a matter of milliseconds, and in the job case it's seconds (at my site, at least). That's because it's JES, with rules for workload, job initialization, resource allocation and so forth.
enrico-sorichetti

Superior Member


Joined: 14 Mar 2007
Posts: 10872
Location: italy

PostPosted: Wed Apr 02, 2014 8:55 pm

Quote:
Just because it's the application requirement to serve multiple clients and to perform such tasks for them.

pretty poor design 8-)

Quote:
Job is a huge overhead, one job per DSNUTILB invocation will reduce the performance greatly

numbers please
vshchukin

New User


Joined: 13 Mar 2014
Posts: 11
Location: Czech Republic

PostPosted: Thu Apr 03, 2014 4:50 pm

enrico-sorichetti wrote:

pretty poor design 8-)

Multithreading is a poor design? Lots of apps use parallel code execution; for example, IBM's WLM multiplies its own address spaces in order to run stored procedures concurrently.

OK, the results of a simple test:
Job with 100 steps of DSNUTILB: 9 seconds.
Direct invocation of DSNUTILB in a loop with 100 iterations: 6 seconds.
50% overhead is huge, I believe.
Also:
Direct invocation: one processor instruction, "BC".
Job invocation: thousands of machine instructions related to job preparation.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Sun Apr 06, 2014 6:46 am

Hello,

One of the "things" about statistics is they can tell different stories using the same values . . .

While 50% could be considered "high/huge", 3 seconds might not be.

Similar happens with multithreading. If there are tasks that use separate resources, multithreading can improve throughput. One of my clients had many threads that tried to run concurrently (multi-threaded), and more time was spent "taking turns" than getting the work done.

I do not believe there is one answer that fits all.
steve-myers

Active Member


Joined: 30 Nov 2013
Posts: 917
Location: The Universe

PostPosted: Sun Apr 06, 2014 10:40 am

vshchukin wrote:
Hello, guys. Thanks for the replies.

Simple example: my program needs to invoke DSNUTILB, and that program will pollute my application's SYSPRINT; I'm trying to avoid that. ...
If your application is running DSNUTILB from within your application, and DSNUTILB is "polluting" your application's SYSPRINT, I would suggest your application use a different DD name. In concept this has been an issue in OS/360-type systems since OS/360 Release 1, for more than 40 years, and the solution has always been to use a different DD name: for example, the sort products have used DD name SYSOUT for their SYSPRINT-type output, and the linking loader product (and the equivalent Binder function) has used DD name SYSLOUT for the same reason.

I started using OS/360 Release 13 in 1968. The IEBPTPCH utility was the most common way to "print" a source-type PDS. The JCL specified SYSOUT=A for both DD names SYSPRINT and SYSUT2. The shop I used in those days ran the original version of MFT and HASP. In MFT, SYSOUT=A directed output to one real printer. IEBPTPCH would print a member and then print a message, something like END OF DATA FOR SDS OR MEMBER. In 1969 they upgraded their system to MFT-II, where each DD statement with SYSOUT=A created a real data set. IEBPTPCH stopped printing the END OF DATA message with the member; it just appeared in the SYSPRINT data set. It was then I realized the END OF DATA message had always belonged in the SYSPRINT data set rather than "polluting" the SYSUT2 data set!
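Steve's suggestion can be sketched as JCL: the application logs to its own DD and leaves SYSPRINT to the program it invokes, so the two outputs never mix. The program name MYAPP and DD name APPLOG are invented for this illustration.

```jcl
//APPSTEP  EXEC PGM=MYAPP
//* The application's own log goes to a private DD name
//* (APPLOG is a hypothetical name for illustration) ...
//APPLOG   DD SYSOUT=*
//* ... leaving SYSPRINT free for whatever the invoked
//* program (DSNUTILB, for example) writes to it.
//SYSPRINT DD SYSOUT=*
```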
PeterHolland

Global Moderator


Joined: 27 Oct 2009
Posts: 2481
Location: Netherlands, Amstelveen

PostPosted: Sun Apr 06, 2014 11:36 am

The following could be of interest/help :

Batch Modernization on z/OS SG24-7779-01

Chapter 16: Increasing concurrency by exploiting BatchPipes
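For reference, a BatchPipes pipe is requested through the SUBSYS parameter on a DD statement. A rough sketch, assuming the BatchPipes subsystem at the site is named BP01; the subsystem, job, program, and dataset names are all site-dependent placeholders:

```jcl
//*--- Writer job: records flow into the pipe, not onto DASD.
//WRITEJOB JOB (ACCT),'PIPE WRITER',CLASS=A
//WRITE    EXEC PGM=MYWRITER
//OUTPIPE  DD DSN=HLQ.PIPE.DATA,SUBSYS=BP01,
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
//*--- Reader job, submitted separately: reads the same pipe.
//READJOB  JOB (ACCT),'PIPE READER',CLASS=A
//READ     EXEC PGM=MYREADER
//INPIPE   DD DSN=HLQ.PIPE.DATA,SUBSYS=BP01,
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
```

Both jobs must be active at the same time: the writer and reader connect through the pipe rather than through a dataset on disk, which is the concurrency the Redbook chapter discusses.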
vshchukin

New User


Joined: 13 Mar 2014
Posts: 11
Location: Czech Republic

PostPosted: Sun Apr 06, 2014 12:37 pm

Thank you guys for all the proposed suggestions,

@Dick
I think the solution that could fix all the issues is creating a new address space, since address spaces shouldn't affect each other. But I'm not sure whether there will be any pitfalls along that way.

@Steve
I would love to use a different DD name, but I'm working on an application that wasn't written exclusively by me, and for some reason it uses SYSPRINT as the primary log DD. That's understandable, since SYSPRINT can be considered the "standard output stream", and it may be used as an API by some external apps. As for the IEBPTPCH utility, that's a funny story :-)

@Peter
Thanks for the BatchPipes hint; I will research whether it can work around the problem. One important point in this story is that, normally, output retrieval should work like "in-memory" communication between programs, avoiding the use of any external storage, particularly HDDs.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Mon Apr 07, 2014 7:09 am

Hello,

I believe you had the solution you wanted to use and have been looking for some way(s) to make this the proper choice . . .

Quote:
One important point in this story is that, normally, output retrieval should work like "in-memory" communication between programs, avoiding the use of any external storage, particularly HDDs.
I do not recall seeing this in the industry . . . Where (which publication(s)) did you find this guidance?
vshchukin

New User


Joined: 13 Mar 2014
Posts: 11
Location: Czech Republic

PostPosted: Tue Apr 08, 2014 3:58 pm

Dick,

Quote:
Where (which publication(s)) did you find this guidance?


I don't think IBM has a publication comparing DASD versus main-storage access speed. Generally, random access is on the order of 100,000 times faster from main storage than from DASD. Whether it is a requirement depends on the application. But I would prefer to avoid using datasets; I believe datasets are for long-term data storage.
Nic Clouston

Global Moderator


Joined: 10 May 2007
Posts: 2455
Location: Hampshire, UK

PostPosted: Tue Apr 08, 2014 6:35 pm

For 50 years (and one day) the IBM 360, and later machines, have used datasets for passing data between programs in a job and between jobs. That is the way they work, unlike younger technologies such as Unix and DOS, which pipe data (though that is probably stored in a temporary file that you do not see).
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Tue Apr 08, 2014 7:18 pm

Yup - keep in mind that in Unix "everything is a file". . .

And on most machines there is not enough memory on the system to accommodate very large files - so they Must be written and re-read. . .