No matter how well you plan the design of a database file, it will eventually need to be changed. When a file's record format changes, every program that uses the file must be re-compiled. If this is not done, the program will fail at run time with a level check error (CPF4131).
To avoid this, either (i) the file must be compiled with LVLCHK(*NO), or (ii) the programs using the file must be re-compiled. The first option is very simple, since you just need to specify LVLCHK(*NO) when compiling the file. However, if you want to maintain data integrity, it is better to re-compile all the programs that use the particular file(s).
If you choose the second method, the first thing you must know is which programs use the particular file(s). One way of finding this out is to check the source of every program for references to the file. This is an easy task if the sources of all the programs live in a single library.
But if the sources are spread across many libraries, it becomes tedious to find all the programs that use the file(s), and there is always a chance that one or more programs will be missed.
To overcome this problem, the iSeries provides the DSPPGMREF command, which lists the objects referenced by a specified program. Type the command and press F4 to prompt it.
Type choices and press Enter.
Program . . . . . . . . . . . . > *ALL          Name, generic*, *ALL
  Library . . . . . . . . . . . > *ALLUSR       Name, *LIBL, *CURLIB...
Output  . . . . . . . . . . . . > *OUTFILE      *, *PRINT, *OUTFILE
Object type . . . . . . . . . . > *ALL          *ALL, *PGM, *SQLPKG...
                 + for more values
File to receive output  . . . .                 Name
  Library . . . . . . . . . . .   *LIBL         Name, *LIBL, *CURLIB
Output member options:
  Member to receive output  . .   *FIRST        Name, *FIRST
  Replace or add records  . . .   *REPLACE      *REPLACE, *ADD
Enter the values as shown above. In the "File to receive output" field, give the name of the physical file to be created; in "Library", give the name of the library where it should reside; then press Enter.
This creates an outfile in the library you specified (let's say the outfile DSPOUTPUT is created in library QGPL).
Now, run a query on this file with the following selection criterion and press Enter.
WHFNAM EQ 'Filename'
where 'Filename' is the physical, logical, or display file that is to be delivered.
The output of this query lists all the programs that use the particular file, so you can re-compile them accordingly.
The outfile also records the mode in which each file is opened; this information is in the WHFUSG field.
The outfile is useful not only for finding the programs that use a particular file, but also for finding the programs that call a particular program. For example, if you want to find all the programs that call program ABC, just use the following selection criterion.
WHFNAM EQ 'ABC'
The output of this query will show all the programs that are calling program 'ABC'.
Monday, March 31, 2008
Learn about Leap Year:
One year is not exactly 365 days and 6 hours (365.25 days); the vernal equinox year is about 365.242374 days. If we simply added one extra day every four years, the calendar would run about one day ahead by the next century.
To adjust for that, we drop the leap day in century years, even though they are divisible by four.
By this calculation, the average number of days per year is 365 + 1/4 - 1/100 + 1/400 = 365.2425:
365 - normal days per year
1/4 - we add one day every four years
1/100 - we remove one day every century
1/400 - but every 400 years we add the day back
Even this value (365.2425) is still about 0.000125 days longer than the actual year (365.242374 days).
This means that around the year 8000 we would need to subtract one day, even though that year is divisible by 4, 100 and 400.
But this may not be necessary, since the vernal equinox year itself changes slightly over time, which may absorb the difference.
Vernal equinox - the moment the Sun crosses directly over the Earth's equator, when the season changes to spring.
For the year 2008 it fell on March 20 at 1:48 a.m. EST.
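The rule described above can be sketched in a few lines of Python (used here purely for illustration; the function and variable names are my own):

```python
from fractions import Fraction

def is_leap(year: int) -> bool:
    # Leap if divisible by 4, except century years, unless divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Average calendar-year length implied by the rule:
avg = 365 + Fraction(1, 4) - Fraction(1, 100) + Fraction(1, 400)
print(float(avg))                                  # 365.2425
print(is_leap(2008), is_leap(1900), is_leap(2000)) # True False True
```

1900 is skipped by the century rule, while 2000 gets the day back under the 400-year rule.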
Friday, March 28, 2008
Save one file from more than one library:
A save file can contain other save files, so here's a method you can try. To keep it simple, let's say you want to save the contents of two libraries, MYLIB1 and MYLIB2, to one save file, SOMELIB/SOMESAVF.
1. Create a save file for each library.
CRTSAVF FILE(SOMELIB/MYLIB1)
CRTSAVF FILE(SOMELIB/MYLIB2)
2. Use the appropriate SAVxxx command to load the save files.
SAVLIB LIB(MYLIB1) DEV(*SAVF) SAVF(SOMELIB/MYLIB1)
SAVLIB LIB(MYLIB2) DEV(*SAVF) SAVF(SOMELIB/MYLIB2)
3. Save the save files to the single save file.
SAVOBJ OBJ(MYLIB*) LIB(SOMELIB) DEV(*SAVF) +
SAVF(SOMELIB/SOMESAVF)
Let's say you want to restore program object DOIT to some system. Here's what you'd have to do.
1. Create the individual save file if necessary.
CRTSAVF FILE(SOMELIB/MYLIB1)
2. Restore the objects from the single save file to the individual save file.
RSTOBJ OBJ(*ALL) SAVLIB(SOMELIB) DEV(*SAVF) +
SAVF(SOMELIB/SOMESAVF)
3. Use the appropriate restore command to restore objects from the individual save file.
RSTOBJ OBJ(DOIT) SAVLIB(MYLIB1) DEV(*SAVF) OBJTYPE(*PGM) +
SAVF(SOMELIB/MYLIB1) RSTLIB(QTEMP)
Thursday, March 27, 2008
Business Process Reengineering:
Business process reengineering (BPR) is a management approach aiming at improvements by means of elevating efficiency and effectiveness of the processes that exist within and across organizations. The key to BPR is for organizations to look at their business processes from a "clean slate" perspective and determine how they can best construct these processes to improve how they conduct business.
Business process reengineering is also known as BPR, Business Process Redesign, Business Transformation, or Business Process Change Management.
The following outline is one model of a BPR effort, based on the PRLC (Process Reengineering Life Cycle) approach.
1. Envision new processes
1. Secure management support
2. Identify reengineering opportunities
3. Identify enabling technologies
4. Align with corporate strategy
2. Initiating change
1. Set up reengineering team
2. Outline performance goals
3. Process diagnosis
1. Describe existing processes
2. Uncover pathologies in existing processes
4. Process redesign
1. Develop alternative process scenarios
2. Develop new process design
3. Design HR architecture
4. Select IT platform
5. Develop overall blueprint and gather feedback
5. Reconstruction
1. Develop/install IT solution
2. Establish process changes
6. Process monitoring
1. Performance measurement, including time, quality, cost, IT performance
2. Link to continuous improvement
Loop-back to diagnosis
BPR, if implemented properly, can give huge returns.
Wednesday, March 26, 2008
Create iSeries Screen on the Fly:
This code snippet creates an iSeries screen on the fly: at run time, when you call this program, it builds the display using the IBM-supplied Dynamic Screen Manager (DSM) APIs.
To clear the screen, we can use the 'QsnClrScr' API.
To write data to the screen, we can use the 'QsnWrtDta' API.
This is just a starting point for developing with the DSM APIs.
FEMPPF     IF   E           K DISK

 * Function Keys
D F_HELP          C                   X'31'                        F1 - KEY
D F_EXIT          C                   X'33'                        F3 - KEY

 * Program Constants
D TEMP            C                   X'20'
D TEMP1           C                   X'00'
D C_HEAD          C                   CONST('EMPLOYEE DETAILS')

 * Work Variables
D HEADTEXT        S            128A
D WSTEXT          S            128A
D WSEMPNO         S                   LIKE(EMPNO)
D WSEMPNAM        S                   LIKE(EMPNAM)
D WSEMPSEX        S                   LIKE(EMPSEX)
D WSEMPAGE        S                   LIKE(EMPAGE)
D WSEMPADDR1      S                   LIKE(EMPADDR1)
D WSEMPADDR2      S                   LIKE(EMPADDR2)
D WSEMPSTATE      S                   LIKE(EMPSTATE)
D WSEMPADDR       S             55A
D TEXTLENGTH      S              9B 0 INZ(32)
D ROW             S              9B 0 INZ
D COLUMN          S              9B 0 INZ
D ROWCNT          S              9  0 INZ(6)
D COL3            S              9  0 INZ(3)
D COL7            S              9  0 INZ(7)
D COL16           S              9  0 INZ(16)
D COL26           S              9  0 INZ(26)
D COL32           S              9  0 INZ(32)
D COL35           S              9  0 INZ(35)
D COL70           S              9  0 INZ(70)
D ERROR           S              8    INZ(X'0000000000000000')
D AID             S              1
D LINES           S              9B 0 INZ(1)
D WF1             S              1
D SCREEN          S              9B 0

 * API to clear the screen
D CLRSCREEN       PR             9B 0 EXTPROC('QsnClrScr')
D  MODE                          1A   OPTIONS(*NOPASS) CONST       MODE
D  CMDBUFHNDLE                   9B 0 OPTIONS(*NOPASS) CONST       COMMAND BUFFER HANDLE
D  LOWLVLENV                     9B 0 OPTIONS(*NOPASS) CONST       LOW LEVEL ENVIRONMENT
D  ERRCDE                        8A   OPTIONS(*NOPASS)             ERROR CODE

 * API to write data to the screen
D WRTDATA         PR             9B 0 EXTPROC('QsnWrtDta')
D  DATA                        128                                 DATA TO BE WRITTEN
D  DATALEN                       9B 0                              LENGTH OF THE DATA
D  FEILDID                       9B 0 OPTIONS(*NOPASS) CONST       FIELD ID
D  ROW                           9B 0 OPTIONS(*NOPASS) CONST       ROW
D  COLUMN                        9B 0 OPTIONS(*NOPASS) CONST       COLUMN
D  STRMATR                       1A   OPTIONS(*NOPASS) CONST       STARTING MONOCHROME ATTRIBUTE
D  ENDMATR                       1A   OPTIONS(*NOPASS) CONST       ENDING MONOCHROME ATTRIBUTE
D  STRCOLATR                     1A   OPTIONS(*NOPASS) CONST       STARTING COLOR ATTRIBUTE
D  ENDCOLATR                     1A   OPTIONS(*NOPASS) CONST       ENDING COLOR ATTRIBUTE
D  CMDBUFHNDLE                   9B 0 OPTIONS(*NOPASS) CONST       COMMAND BUFFER HANDLE
D  LOWLVLENV                     9B 0 OPTIONS(*NOPASS) CONST       LOW LEVEL ENVIRONMENT
D  ERRCDE                        8A   OPTIONS(*NOPASS)             ERROR CODE

D GetAID          PR             1A   EXTPROC('QsnGetAID')
D  AID                           1A   OPTIONS(*NOPASS)
D  ENV                           9B 0 OPTIONS(*NOPASS) CONST       LOW LEVEL ENVIRONMENT
D  ERRCDE                        8A   OPTIONS(*NOPASS)             ERROR CODE

D RollUp          PR             9B 0 EXTPROC('QsnRollUp')
D  LINES                         9B 0 CONST
D  TOP                           9B 0 CONST
D  BOTTOM                        9B 0 CONST
D  CMDBUFHNDLE                   9B 0 OPTIONS(*NOPASS) CONST       COMMAND BUFFER HANDLE
D  LOWLVLENV                     9B 0 OPTIONS(*NOPASS) CONST       LOW LEVEL ENVIRONMENT
D  ERRCDE                        8    OPTIONS(*NOPASS)             ERROR CODE

 ********************************************************************
 * Clear Screen Subroutine
C                   EXSR      CLRSCR
 * Subroutine to write the Screen Headings
C                   EXSR      HEADSR
 * Subroutine to write the Screen Footer
C                   EXSR      FOOTSR
 * Subroutine to write the Data
C                   EXSR      WRTDTASR

C                   EVAL      *INLR = *ON
C                   RETURN

 ********************************************************************
 * Clear Screen Subroutine
C     CLRSCR        BEGSR

 * Call the CLRSCREEN procedure to clear the screen initially.
 * The values passed are:
 *   Mode = 4, set the screen to 27 x 132 mode
 *   Command Buffer Handle = 0, the screen is cleared immediately
 *   Low Level Environment = 0, the default low-level environment is used
 *   Error Code - to store the returned error code
C                   EVAL      SCREEN = CLRSCREEN('4' : 0 : 0 : ERROR)

C     CLRSCRE       ENDSR

 ********************************************************************
 * Subroutine to write the Headings
C     HEADSR        BEGSR
 * Write the Screen Heading for the first time
C                   EVAL      HEADTEXT = C_HEAD
C                   EVAL      ROW = 2
C                   EVAL      COLUMN = 25
C                   EXSR      WRTHEADSR

C                   EVAL      HEADTEXT = '================'
C                   EVAL      ROW = 3
C                   EVAL      COLUMN = 25
C                   EXSR      WRTHEADSR

 * Write the Column Headings
C                   EVAL      HEADTEXT = 'EMP NAME'
C                   EVAL      ROW = 4
C                   EVAL      COLUMN = 3
C                   EXSR      WRTHEADSR

C                   EVAL      HEADTEXT = '==========='
C                   EVAL      ROW = 5
C                   EVAL      COLUMN = 3
C                   EXSR      WRTHEADSR

C                   EVAL      HEADTEXT = 'EMP SEX'
C                   EVAL      ROW = 4
C                   EVAL      COLUMN = 16
C                   EXSR      WRTHEADSR

C                   EVAL      HEADTEXT = '==========='
C                   EVAL      ROW = 5
C                   EVAL      COLUMN = 16
C                   EXSR      WRTHEADSR

C                   EVAL      HEADTEXT = 'EMP ADDRESS'
C                   EVAL      ROW = 4
C                   EVAL      COLUMN = 32
C                   EXSR      WRTHEADSR

C                   EVAL      HEADTEXT = '==========='
C                   EVAL      ROW = 5
C                   EVAL      COLUMN = 32
C                   EXSR      WRTHEADSR

C                   EVAL      HEADTEXT = 'EMP STATE'
C                   EVAL      ROW = 4
C                   EVAL      COLUMN = 70
C                   EXSR      WRTHEADSR

C                   EVAL      HEADTEXT = '========='
C                   EVAL      ROW = 5
C                   EVAL      COLUMN = 70
C                   EXSR      WRTHEADSR

C     HEADSRE       ENDSR

 ********************************************************************
 * Subroutine to write the Screen Footer
C     FOOTSR        BEGSR
C                   EVAL      HEADTEXT = 'F1-Help'
C                   EVAL      ROW = 25
C                   EVAL      COLUMN = 5
C                   EVAL      SCREEN = WRTDATA(HEADTEXT:TEXTLENGTH:0:ROW:
C                             COLUMN:TEMP:TEMP:TEMP:TEMP:0:0:ERROR)

C                   EVAL      HEADTEXT = 'F3-Exit'
C                   EVAL      ROW = 25
C                   EVAL      COLUMN = 17
C                   EVAL      SCREEN = WRTDATA(HEADTEXT:TEXTLENGTH:0:ROW:
C                             COLUMN:TEMP:TEMP:TEMP:TEMP:0:0:ERROR)

C                   EVAL      HEADTEXT = 'F6-Add'
C                   EVAL      ROW = 25
C                   EVAL      COLUMN = 31
C                   EVAL      SCREEN = WRTDATA(HEADTEXT:TEXTLENGTH:0:ROW:
C                             COLUMN:TEMP:TEMP:TEMP:TEMP:0:0:ERROR)

C     FOOTSRE       ENDSR

 ********************************************************************
 * Subroutine to write the Data
C     WRTDTASR      BEGSR

C                   READ      EMPR                                   90
C                   DOW       *IN90 = *OFF

C                   EVAL      WSTEXT = EMPNAM
C                   EVAL      TEXTLENGTH = %LEN(EMPNAM)
C                   EVAL      ROW = ROWCNT
C                   EVAL      COLUMN = COL3
C                   EXSR      DATASR

C                   EVAL      WSTEXT = EMPSEX
C                   EVAL      TEXTLENGTH = %LEN(EMPSEX)
C                   EVAL      ROW = ROWCNT
C                   EVAL      COLUMN = COL16
C                   EXSR      DATASR

C                   EVAL      WSEMPADDR1 = EMPADDR1
C                   EVAL      WSEMPADDR2 = EMPADDR2
C                   EVAL      WSEMPADDR = WSEMPADDR1 + WSEMPADDR2
C                   EVAL      WSTEXT = WSEMPADDR
C                   EVAL      TEXTLENGTH = %LEN(WSEMPADDR)
C                   EVAL      ROW = ROWCNT
C                   EVAL      COLUMN = COL32
C                   EXSR      DATASR

C                   EVAL      WSTEXT = EMPSTATE
C                   EVAL      TEXTLENGTH = %LEN(EMPSTATE)
C                   EVAL      ROW = ROWCNT
C                   EVAL      COLUMN = COL70
C                   EXSR      DATASR

C                   READ      EMPR                                   90
C                   EVAL      ROWCNT = ROWCNT + 1
C                   ENDDO
C*                  EVAL      SCREEN = ROLLUP(LINES:1:24:0:0:ERROR)

C                   EVAL      WF1 = GETAID(AID : 0 : ERROR)
C                   IF        AID = F_EXIT
C                   EVAL      *INLR = *ON
C                   ENDIF

C     WRTDTASRE     ENDSR

 ********************************************************************
 * Subroutine to write the Data
C     DATASR        BEGSR

C                   EVAL      SCREEN = WRTDATA(WSTEXT:TEXTLENGTH:0:ROW:
C                             COLUMN:TEMP:TEMP:TEMP:TEMP:0:0:ERROR)

 * Clear the work variables
C                   MOVE      *ZEROS        TEXTLENGTH
C                   MOVE      *ZEROS        WSEMPNO
C                   MOVE      *BLANKS       WSTEXT
C                   MOVE      *BLANKS       WSEMPADDR1
C                   MOVE      *BLANKS       WSEMPADDR2
C                   MOVE      *BLANKS       WSEMPADDR
C                   MOVE      *BLANKS       WSEMPSTATE
C                   MOVE      *BLANKS       WSEMPNAM
C                   MOVE      *BLANKS       WSEMPSEX

C     DATASRE       ENDSR

 ********************************************************************
 * Subroutine to write the Headers
C     WRTHEADSR     BEGSR

C                   EVAL      SCREEN = WRTDATA(HEADTEXT:TEXTLENGTH:0:ROW:
C                             COLUMN:TEMP:TEMP:TEMP:TEMP:0:0:ERROR)
C                   EVAL      HEADTEXT = *BLANKS

C     WRTHEADSRE    ENDSR
EMPPF - Physical File

A                                      UNIQUE
A          R EMPR
A            EMPNO          5P 0
A            EMPNAM        20A
A            EMPSEX         1A
A            EMPAGE         3P 0
A            EMPADDR1      25A
A            EMPADDR2      25A
A            EMPSTATE      10A
A          K EMPNO
Tuesday, March 25, 2008
Quick way to parse Spoolfile text in Excel:
If you want to parse spooled-file rows, such as "Employee Number Pay Rule Hours Worked", into separate columns, you don't need fancy formulas. Excel has a tool that makes the job a snap: the Text To Columns feature.
Here's how it works:
First, select the column of cells that contains the raw data, and then open the Data menu and choose Text To Columns. When you do, Excel launches the Convert Text To Columns Wizard.
Choose the type that best describes the data and click Next. If you choose the Delimited option, activate the check box for Space (and deselect Tab, the default selection), since in most cases the delimiter is simply the space between a number and its label. If you choose Fixed width, you can move, create or delete column breaks.
Click Next to see the final wizard screen, then finish: Excel converts the tokens into separate columns and stores the numbers as values. You can also click the Advanced button to define the decimal and thousands separators and to specify that a trailing minus marks negative numbers.
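The same split can be done programmatically; here is a small Python sketch (the sample rows and column names are invented for illustration):

```python
# Hypothetical spooled-file rows, as they might appear in a single column.
rows = [
    "1001  SALARIED  40.00",
    "1002  HOURLY    37.50",
]

# Splitting on runs of whitespace mirrors choosing "Delimited" with the
# Space delimiter (Tab deselected) in the Text To Columns wizard.
parsed = [r.split() for r in rows]

# Convert the numeric token to a value, as Excel does after conversion.
records = [(emp, rule, float(hours)) for emp, rule, hours in parsed]
print(records)
```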
Wednesday, March 19, 2008
Customer Satisfaction:
Customer satisfaction, a business term, is a measure of how products and services supplied by a company meet or surpass customer expectation. It is seen as a key performance indicator within business.
In a competitive marketplace where businesses compete for customers, customer satisfaction is seen as a key differentiator and increasingly has become a key element of business strategy.
Measuring customer satisfaction provides an indication of how successful the organization is at providing products and/or services to the marketplace. Customer satisfaction is an ambiguous and abstract concept and the actual manifestation of the state of satisfaction will vary from person to person and product/service to product/service. The state of satisfaction depends on a number of both psychological and physical variables which correlate with satisfaction behaviors such as return and recommend rate. The level of satisfaction can also vary depending on other options the customer may have and other products against which the customer can compare the organization's products.
Ten domains of satisfaction include: Quality, Value, Timeliness, Efficiency, Ease of Access, Environment, Inter-departmental Teamwork, Front line Service Behaviors, Commitment to the Customer and Innovation. These factors are emphasized for continuous improvement and organizational change measurement and are most often utilized to develop the architecture for satisfaction measurement as an integrated model.
The usual measure of customer satisfaction involves a survey with a set of statements using a Likert technique or scale. The customer is asked to evaluate each statement in terms of their perception and expectation of the performance of the organization being measured.
Tuesday, March 18, 2008
Avoid Divide by Zero error in Query and SQL:
Many times when doing mass updates or query reports, we run into situations where a "divide by zero" error occurs. Sure, we can put in all sorts of error trapping, but there may still be conditions where the value we are dividing by is zero. A simple workaround is to add a minuscule value to the field that may be zero:
(profit/(prfinv+.0001) ) *100
It doesn't affect the calculation and protects from program crashes. The same method can be applied when using a calculated value in SQL.
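Outside Query and SQL, the same epsilon guard can be illustrated with a minimal Python sketch (the names profit and prfinv are taken from the expression above):

```python
def margin_pct(profit, prfinv, eps=0.0001):
    """Return profit as a percentage of prfinv.

    Adding a tiny epsilon to the divisor avoids a divide-by-zero
    crash when prfinv is zero, at the cost of a negligible skew
    in the result.
    """
    return (profit / (prfinv + eps)) * 100

print(margin_pct(50, 200))   # very close to 25.0
print(margin_pct(50, 0))     # huge but finite, no crash
```

The trade-off is the slight inaccuracy; when the divisor is legitimately zero the result is a very large number rather than an error, so downstream code should expect that.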
Monday, March 17, 2008
What determines when a job log will be created?
Every job that runs on your server has an associated job log that records its activities. A job log can contain the following:
· The commands in the job
· The commands in a control language (CL) program
· All messages associated with that job
There are several ways within OS/400 to specify or restrict the creation of a job log.
The message logging parameters on the job description and for an active job determine what kind of information will be collected:
Message logging: LOG
Level . . . . . . . . . . . . 4
Severity . . . . . . . . . . . 00
Text . . . . . . . . . . . . . *NOLIST
Log CL program commands . . . . LOGCLPGM *NO
If the message logging 'TEXT' parameter is set to *NOLIST, a job log will be created only if the job ends abnormally. If the job completes normally, no job log will be created. This is the same whether the job is an interactive or batch job.
If any value other than *NOLIST is specified for the message logging 'TEXT' parameter in a batch job, a job log will always be produced -- whether the job ends normally or abnormally.
This works differently for interactive jobs, though. To conserve disk space consumed by job logs, the SIGNOFF command may be defined as
Sign Off (SIGNOFF)
Type choices, press Enter.
Job log . . . . . . . . . . . . LOG *NOLIST
Drop line . . . . . . . . . . . DROP *DEVD
End connection . . . . . . . . . ENDCNN *NO
So, by default, when an interactive job is ended normally, no job log will be produced as specified by the LOG(*NOLIST) parameter. However, if an interactive job ends abnormally, a job log will be produced.
This job log is usually stored in the QEZJOBLOG output queue in library QUSRSYS. You can determine where your job log output goes with this command:
DSPFD FILE(QPJOBLOG)
Scroll down to the Spooling Description section of the Display File Description listing to see which output queue job log output will be directed to:
Spooling Description
Spooled output queue . . . . . . . . . . . : OUTQ QEZJOBLOG
Library . . . . . . . . . . . . . . . . . : QUSRSYS
If you want to always force the creation of a job log from an interactive job, you can do it in one of two ways:
1. When you sign off enter SIGNOFF LOG(*LIST) instead of using the default.
2. Prior to signing off enter DSPJOBLOG OUTPUT(*PRINT).
When one of those options is used, a job log will always be created from an interactive job.
Now that you understand how job logs are created on the iSeries, let's look at how to view one.
To view a job log that has already been created, use one of the following commands:
DSPSPLF FILE(QPJOBLOG) JOB(job_number/Usrid/Job_name)
DSPJOB JOB(job_number/Usrid/Job_name) OPTION(*SPLF)
To see ALL job log output on the system, use this command:
WRKOUTQ OUTQ(QUSRSYS/QEZJOBLOG)
Processing Several Files with the Same File Specification:
I have an RPG program that needs to read a large number of files and perform the same processing for each of them. I may not know the file names at compile time. Some of the files have different record lengths, and some of them even have different formats, although the critical fields correspond in each file (e.g., each file format has a field called Tranamount, Signed(13.2)). How can I accomplish this requirement without having to code separate file specifications for each file?
Here is the solution for the above scenario:
You can probably do what you want with a combination of the EXTFILE and USROPN F-spec keywords. If you can identify the record format with a record identification field, you can describe each format with input specifications. In the file specification, use the record length of the longest record.
Here's a skeleton of what your code might look like:
FInput IF F 1028 Disk Extfile(Inputfile)
F Usropn
// ------------------------------ Standalone variables
D Inputfile S 21
// ------------------------------ Input specifications
// Transtype = 22 Savings debit
IInput NS 22 1 C2 2 C2
I 1 2 Trantype
I 15 27 2Tranamount
// Transtype = 23 Checking debit
I NS 23 1 C2 2 C3
I 1 2 Trantype
I 35 47 2Tranamount
// Transtype = 24 Misc debit
I NS 24 1 C2 2 C4
I 1 2 Trantype
I 63 75 2Tranamount
// ---------------------------------------------------
/Free
  Dou *Inlr;
    // For each file to be processed,
    // assign name to Inputfile.
    // Library name is optional.
    Inputfile = 'MYLIBRARY' + '/' + 'MYFILE';
    Open Input;
    Dou %Eof(Input);
      Read Input;
      If %Eof(Input);
        *Inlr = *On;
      Else;
        // Process record
      Endif;
    Enddo;
    Close Input;
  Enddo;
  Return;
/End-free
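For readers less familiar with RPG input specifications, the same idea (one generic input, dispatch on a record-identification code) can be sketched in Python. The field offsets below are hypothetical and only loosely mirror the I-specs above:

```python
def parse_record(line):
    """Dispatch a fixed-width record on its 2-character type code.

    Hypothetical layouts, echoing the I-specs above: type 22 carries
    the amount in columns 15-27, type 23 in columns 35-47, type 24
    in columns 63-75 (1-based, with 2 implied decimal places).
    """
    offsets = {'22': (14, 27), '23': (34, 47), '24': (62, 75)}
    trantype = line[0:2]
    start, end = offsets[trantype]          # KeyError = unknown format
    amount = int(line[start:end]) / 100     # 2 implied decimals
    return trantype, amount

# A sample "savings debit" record: type code, filler, 13-digit amount.
rec = '22' + ' ' * 12 + '0000000012345'
print(parse_record(rec))
```

As in the RPG version, every format shares one input definition; only the offsets for the critical fields differ per record type.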
Tuesday, March 11, 2008
Transfer iSeries Files into Excel using Client Access:
The first thing you need to do is to make sure Client Access is added to Excel. Follow these steps:
1. In Microsoft Excel, select "ADD-INS" under the "TOOLS" Menu.
2. If "Client Access data transfer" is displayed, check the box; otherwise, do step three.
3. Click "Browse" and navigate through the folders: Program Files > IBM > Client Access > Shared. The add-in file CWBTFXLA should be there.
Now just select the button from your tool bar to transfer data from the iSeries.
Sorting Algorithm:
In computer science and mathematics, a sorting algorithm is an algorithm that puts elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. Some of the popularly used sorting algorithms are given below.
Bubble Sort:
Bubble sort is a straightforward and simplistic method of sorting data that is used in computer science education. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. While simple, this algorithm is highly inefficient and is rarely used except in education.
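A minimal Python sketch of the pass-and-swap loop described above:

```python
def bubble_sort(data):
    """Repeatedly swap adjacent out-of-order pairs; stop when a
    full pass makes no swaps (the list is then sorted)."""
    items = list(data)                 # work on a copy
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items

print(bubble_sort([5, 1, 4, 2, 8]))    # [1, 2, 4, 5, 8]
```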
Selection Sort:
Selection sort is a simple sorting algorithm that improves on the performance of bubble sort. It works by first finding the smallest element using a linear scan and swapping it into the first position in the list, then finding the second smallest element by scanning the remaining elements, and so on. Selection sort is unique compared to almost any other algorithm in that its running time is not affected by the prior ordering of the list: it performs the same number of operations because of its simple structure. Selection sort requires (n - 1) swaps and hence Θ(n) memory writes. However, selection sort requires (n - 1) + (n - 2) + ... + 2 + 1 = n(n - 1) / 2 = Θ(n²) comparisons. Thus it can be very attractive if writes are the most expensive operation, but otherwise selection sort will usually be outperformed by insertion sort or the more complicated algorithms.
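The scan-and-swap structure can be sketched in a few lines of Python:

```python
def selection_sort(data):
    """Find the smallest remaining element and swap it into place;
    at most n - 1 swaps regardless of the initial ordering."""
    items = list(data)
    for i in range(len(items) - 1):
        # Index of the smallest element in the unsorted tail.
        smallest = min(range(i, len(items)), key=items.__getitem__)
        items[i], items[smallest] = items[smallest], items[i]
    return items
```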
Insertion Sort:
Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly-sorted lists, and often is used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. The insertion sort works just like its name suggests - it inserts each item into its proper place in the final list. The simplest implementation of this requires two list structures - the source list and the list into which sorted items are inserted. To save memory, most implementations use an in-place sort that works by moving the current item past the already sorted items and repeatedly swapping it with the preceding item until it is in place.
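Here is a Python sketch of the in-place variant described above, which moves each item left past the already-sorted prefix:

```python
def insertion_sort(data):
    """Repeatedly swap the current item with its predecessor
    until it reaches its proper place in the sorted prefix."""
    items = list(data)
    for i in range(1, len(items)):
        j = i
        while j > 0 and items[j - 1] > items[j]:
            items[j - 1], items[j] = items[j], items[j - 1]
            j -= 1
    return items
```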
Shell Sort:
Shell sort improves upon bubble sort and insertion sort by moving out of order elements more than one position at a time. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort. Although this method is inefficient for large data sets, it is one of the fastest algorithms for sorting small numbers of elements (sets with less than 1000 or so elements). Another advantage of this algorithm is that it requires relatively small amounts of memory.
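A common way to express shell sort in code is as gap-insertion sort, with the gap shrinking until it reaches 1; a minimal Python sketch:

```python
def shell_sort(data):
    """Sort elements a 'gap' apart with insertion sort, halving
    the gap each round; the final round (gap 1) is plain
    insertion sort on an almost-sorted list."""
    items = list(data)
    gap = len(items) // 2
    while gap > 0:
        for i in range(gap, len(items)):
            current = items[i]
            j = i
            while j >= gap and items[j - gap] > current:
                items[j] = items[j - gap]
                j -= gap
            items[j] = current
        gap //= 2
    return items
```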
Merge Sort:
Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list. Of the algorithms described here, this is the first that scales well to very large lists, because its worst-case running time is O(n log n).
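The recursive split-and-merge version is the easiest to sketch in Python:

```python
def merge_sort(data):
    """Split in half, sort each half recursively, then merge the
    two sorted halves; worst-case O(n log n)."""
    if len(data) <= 1:
        return list(data)
    mid = len(data) // 2
    left, right = merge_sort(data[:mid]), merge_sort(data[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]  # append the leftover tail
```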
Heap Sort:
Heap sort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows Heap sort to run in O(n log n) time.
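In Python the heap machinery comes ready-made in the standard heapq module, so heap sort reduces to "heapify, then pop n times":

```python
import heapq

def heap_sort(data):
    """Build a min-heap in O(n), then pop the smallest element
    n times; each pop re-heapifies in O(log n)."""
    heap = list(data)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]
```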
Quick Sort:
Quick sort is a divide and conquer algorithm which relies on a partition operation: to partition an array, we choose an element, called a pivot, move all smaller elements before the pivot, and move all greater elements after it. This can be done efficiently in linear time and in-place. We then recursively sort the lesser and greater sub lists. Efficient implementations of quick sort (with in-place partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, this makes quick sort one of the most popular sorting algorithms, available in many standard libraries. The most complex issue in quick sort is choosing a good pivot element; consistently poor choices of pivots can result in drastically slower (O(n²)) performance, but if at each step we choose the median as the pivot then it works in O(n log n).
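A simple (not in-place) Python sketch of the partition-and-recurse idea, using the middle element as the pivot:

```python
def quick_sort(data):
    """Partition around a pivot, then recursively sort the lesser
    and greater sublists. Real in-place implementations partition
    within the array; this sketch favors clarity."""
    if len(data) <= 1:
        return list(data)
    pivot = data[len(data) // 2]
    lesser  = [x for x in data if x < pivot]
    equal   = [x for x in data if x == pivot]
    greater = [x for x in data if x > pivot]
    return quick_sort(lesser) + equal + quick_sort(greater)
```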
Bucket Sort:
Bucket sort is a sorting algorithm that works by partitioning an array into a finite number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm. A variation of this method called the single buffered count sort is faster than the quick sort and takes about the same time to run on any set of data.
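A minimal Python sketch for numeric input, partitioning into equal-width buckets and sorting each one individually:

```python
def bucket_sort(data, buckets=10):
    """Partition values into equal-width buckets, sort each bucket,
    then concatenate the buckets in order. Assumes numeric input."""
    if not data:
        return []
    lo, hi = min(data), max(data)
    width = (hi - lo) / buckets or 1     # avoid zero width
    bins = [[] for _ in range(buckets)]
    for x in data:
        idx = min(int((x - lo) / width), buckets - 1)
        bins[idx].append(x)
    return [x for b in bins for x in sorted(b)]
```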
Radix Sort:
Radix sort is an algorithm that sorts a list of fixed-size numbers of length k in O(n • k) time by treating them as bit strings. We first sort the list by the least significant bit while preserving their relative order using a stable sort. Then we sort them by the next bit, and so on from right to left, and the list will end up sorted. Most often, the counting sort algorithm is used to accomplish the bitwise sorting, since the number of values a bit can have is small.
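The same least-significant-first idea can be sketched in Python using decimal digits rather than bits (each bucketing pass is stable, which is what makes the final result sorted):

```python
def radix_sort(data, base=10):
    """LSD radix sort for non-negative integers: stably bucket the
    values digit by digit, least significant digit first."""
    items = list(data)
    if not items:
        return items
    digits = len(str(max(items)))
    for d in range(digits):
        bins = [[] for _ in range(base)]
        for x in items:
            bins[(x // base ** d) % base].append(x)  # stable append
        items = [x for b in bins for x in b]
    return items
```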
Monday, March 10, 2008
Automatically getting the sequence number:
This tip explains how to define a sequence number column on a table that is populated by the system every time a record is added to that table.
This could be achieved by defining an identity column to that table.
The following example will create a table named TableName with one identity column SEQNUM and a character column Column2. SEQNUM will be the primary key for the new table.
SEQNUM values will be 1, 2, 3, 4, ..., 999999999999999.
Example:
Create Table TableName
(SEQNUM NUMERIC(15, 0) NOT NULL
GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1),
COLUMN2 CHAR(3) NOT NULL WITH DEFAULT,
PRIMARY KEY (SEQNUM))
To insert a record:
Insert into QGPL/tablename (column2) values('ABC')
The data in the table after running the insert statement three times:
SEQNUM COLUMN2
1 ABC
2 ABC
3 ABC
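The same behavior can be tried outside DB2 with Python's built-in sqlite3 module. Note this is only an analogue: SQLite's AUTOINCREMENT syntax differs from DB2's GENERATED ALWAYS AS IDENTITY, and the in-memory database here stands in for library QGPL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE tablename (
           seqnum  INTEGER PRIMARY KEY AUTOINCREMENT,  -- identity analogue
           column2 CHAR(3) NOT NULL DEFAULT ''
       )"""
)
# Insert three records without supplying seqnum.
for _ in range(3):
    conn.execute("INSERT INTO tablename (column2) VALUES ('ABC')")
print(conn.execute("SELECT seqnum, column2 FROM tablename").fetchall())
# [(1, 'ABC'), (2, 'ABC'), (3, 'ABC')]
```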
Thursday, March 6, 2008
Compare Spool files:
This method allows a user to compare two spooled files for differences. This is very handy when needing to compare test runs from before and after program changes to a particular program.
1. Create two physical files in your library, as follows:
CRTPF FILE(PHSLIB/PHSSPOOL1)
RCDLEN(132)
TEXT('physical file for compared spooled file #1')
SIZE(*NOMAX)
CRTPF FILE(PHSLIB/PHSSPOOL2)
RCDLEN(132)
TEXT('physical file for compared spooled file #2')
SIZE(*NOMAX)
2. Copy the two spooled files you want to compare to the files created above.
CPYSPLF FILE(TBLD_SRCG)
TOFILE(PHSLIB/PHSSPOOL1)
JOB(046095/PHILLIP/TBLDRV_SUR)
SPLNBR(1)
CPYSPLF FILE(TBLD_SRCG)
TOFILE(PHSLIB/PHSSPOOL2)
JOB(046250/PHILLIP/TBLDRV_SUR)
SPLNBR(1)
3. Now compare the two physical files. You can send the output to the screen immediately or to a spooled file to look at; in this example, the output goes to a spooled file. The results will give you the differences between the two.
CMPPFM NEWFILE(PHSLIB/PHSSPOOL2)
OLDFILE(PHSLIB/PHSSPOOL1)
OLDMBR(*FIRST)
RPTTYPE(*CHANGE)
OUTPUT(*PRINT)
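Off the iSeries, the same before/after comparison can be sketched with Python's standard difflib module (the file contents below are invented stand-ins for the two copied spooled files):

```python
import difflib

# Stand-ins for the "before" and "after" report runs.
before = ["HEADER  RUN 1", "TOTAL   100", "END"]
after  = ["HEADER  RUN 2", "TOTAL   150", "END"]

# unified_diff reports only the changed lines, roughly like
# CMPPFM with RPTTYPE(*CHANGE).
diff = list(difflib.unified_diff(before, after, lineterm=""))
for line in diff:
    print(line)
```

For real spooled-file copies you would read the two physical files' contents into the lists instead of hard-coding them.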
Are you /FREE:
Free-form RPG was introduced in V5R1 of OS/400. For those of you who have not tried it yet, these are the rules for coding in free form:
• Free form code is placed between the /FREE and /END-FREE compiler directives.
• The structure of an operation is the operation code followed by Factors 1, 2, and the Result Field.
• Each statement must end with a semicolon (;).
• Operands are no longer limited to 14 characters, especially for operations that used Factor 1.
• No blanks are allowed between an operation code and extenders.
• Only one operation code may be coded on a line.
• Comments are delimited by //. Comments may be placed at the end of any free-form statement (after the ;).
• Some operation codes are not currently supported (more in a moment).
• Some operation codes (such as CALLP and EVAL) are optional, except where an extender is needed.
Wednesday, March 5, 2008
Process Models:
Process models are processes of the same nature that are classified together into a model. Thus, a process model is a description of a process at the type level. Since the process model is at the type level, a process is an instantiation of it. The same process model is used repeatedly for the development of many applications and thus, has many instantiations. One possible use of a process model is to prescribe how things must/should/could be done in contrast to the process itself which is really what happens. A process model is roughly an anticipation of what the process will look like.
The goals of a process model are:
• To be Descriptive
o Track what actually happens during a process.
o Takes the point of view of an external observer who looks at the way a process has been performed and determines the improvements that have to be made to make it perform more effectively or efficiently.
• Prescriptive
o Defines the desired processes and how they should/could/might be performed.
o Lays down rules, guidelines, and behavior patterns which, if followed, would lead to the desired process performance. They can range from strict enforcement to flexible guidance.
• Explanatory
o Provides explanations about the rationale of processes.
o Explore and evaluate the several possible courses of action based on rational arguments.
o Establish an explicit link between processes and the requirements that the model needs to fulfill.
o Pre-defines points at which data can be extracted for reporting purposes.
Processes can be of different kinds. These definitions “correspond to the various ways in which a process can be modeled”.
• Strategic processes
o investigate alternative ways of doing a thing and eventually produce a plan for doing it
o are often creative and require human co-operation; thus, alternative generation and selection from an alternative are very critical activities
• Tactical processes
o help in the achievement of a plan
o are more concerned with the tactics to be adopted for actual plan achievement than with the development of a plan of achievement
• Implementation processes
o are the lowest level processes
o are directly concerned with the details of the what and how of plan implementation
Monday, March 3, 2008
Special Authorities:
Assigning users only the capabilities sufficient to perform their job functions is a requirement of several laws and regulations (including PCI Data Security Standards). In addition, it makes good business sense to allow users only the capabilities that they need.
Here are the capabilities (special authorities) that we can grant users and the functions they provide:
Special Authorities and Their Functions
*AUDIT Configuration of i5/OS auditing attributes
*IOSYSCFG Communications configuration and management
*JOBCTL Management of a job on the system
*SAVSYS Ability to save and restore the entire system or any object on the system, regardless of authority to the object
*SECADM Create/change/delete user profiles
*SERVICE Ability to use Service Tools, perform a service trace, debug another user's job
*SPLCTL Access to every spooled file on the system regardless of authority to the outq (the "*ALLOBJ" of spooled files)
*ALLOBJ Access to every object on the system. It is impossible to prevent an *ALLOBJ user from accessing an object!
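A practical way to review who already holds these special authorities is to dump all user profiles to an outfile and then query it. The sketch below assumes illustrative library and file names (MYLIB/USRPROFS); the outfile layout and field names should be verified against your release's DSPUSRPRF model file:

```
DSPUSRPRF  USRPRF(*ALL) TYPE(*BASIC) OUTPUT(*OUTFILE) +
             OUTFILE(MYLIB/USRPROFS)
```

You can then use Query or SQL over MYLIB/USRPROFS, scanning the special-authorities field (UPSPAU in the DSPUSRPRF *BASIC model file, if memory serves) for values such as *ALLOBJ or *SECADM, and tighten any profiles that hold more capability than the job role requires.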
Sunday, March 2, 2008
Biometrics:
Biometrics is the application of statistics and mathematics to problems with a biological component. In retail, these are usually fingerprint, voice, signature and other similar recognition methods.
Biometric characteristics can be divided into two main classes.
• Physiological characteristics are related to the shape of the body. The oldest such trait, in use for more than 100 years, is the fingerprint. Other examples are face recognition, hand geometry, and iris recognition.
• Behavioral characteristics are related to the behavior of a person. The first characteristic to be used, and still widely used today, is the signature. More modern approaches study keystroke dynamics and voice.
The diagram shows a simple block diagram of a biometric system. The main operations a system can perform are enrollment and test. During the enrollment, biometric information from an individual is stored. During the test, biometric information is detected and compared with the stored information.
The first block (sensor) is the interface between the real world and the system; it has to acquire all the necessary data. Most of the time it is an image acquisition system, but it can change according to the characteristic desired. The second block performs all the necessary pre-processing: it has to remove artifacts from the sensor, enhance the input (e.g. removing background noise), apply some kind of normalization, etc. In the third block, the needed features are extracted. This step is important, as the correct features need to be extracted in an optimal way. A vector of numbers or an image with particular properties is used to create a template. If enrollment is being performed, the template is simply stored somewhere (on a card, within a database, or both). If a matching phase is being performed, the obtained template is passed to a matcher that compares it with the stored templates, estimating the distance between them using some algorithm (e.g. Hamming distance). The result of this comparison is then output for the specified use or purpose (e.g. entrance to a restricted area).
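The matching step described above can be sketched with a toy example. Assuming templates are fixed-length bit strings (as in iris codes), a matcher computes the normalized Hamming distance and accepts when it falls below a threshold; the function names and the 0.25 threshold here are illustrative, not taken from any particular system:

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    """Count the number of differing bits between two equal-length templates."""
    if len(a) != len(b):
        raise ValueError("templates must be the same length")
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def matches(stored: bytes, probe: bytes, threshold: float = 0.25) -> bool:
    """Accept if the fraction of differing bits is below the threshold."""
    total_bits = len(stored) * 8
    return hamming_distance(stored, probe) / total_bits < threshold

# Two 16-bit templates that differ in a single bit: 1/16 = 0.0625 < 0.25
stored = bytes([0b10110010, 0b01100111])
probe  = bytes([0b10110011, 0b01100111])
print(hamming_distance(stored, probe))  # 1
print(matches(stored, probe))           # True
```

Real systems use far longer templates and thresholds tuned to balance false accepts against false rejects, but the accept/reject decision is this same distance-under-threshold test.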