You can make subfile records available in a called program by defining the same display file in both the calling and called programs and sharing the open data path (ODP).
For example, suppose PgmA, which uses display file DspA, calls PgmB.
To make DspA's subfile records available to PgmB, define file DspA in PgmB. Then issue the command OVRDSPF FILE(DspA) SHARE(*YES) before calling PgmA. When PgmA calls PgmB, the ODP is shared and PgmB can read records from DspA. You don't need to write any of DspA's record formats from PgmB. However, for PgmB to compile, you need to reference at least one of them. Simply write an EXFMT statement and condition it so that it will never be executed.
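A minimal sketch of the calling environment, assuming a CL wrapper and the hypothetical names from the example (DspA, PgmA, PgmB):
OVRDSPF FILE(DSPA) SHARE(*YES) /* Share the ODP before PgmA opens DspA */
CALL PGM(PGMA) /* PgmA loads the subfile, then calls PgmB */
DLTOVR FILE(DSPA) /* Remove the override afterward */
In PgmB, condition the EXFMT on an indicator that is never set on; that satisfies the compiler's need for a record format reference without ever writing to the display.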
Tuesday, September 25, 2007
Ins and Outs of Journaling:
Journaling on the iSeries typically involves the recording of the activity related to files, namely, physical files. When a file is being journaled, activity such as file-opens, file-closes and data updates are recorded. For example, when a program writes out a new record or updates an existing one, the system makes an entry in the associated journal. The entry contains such information as the job name and program name that made the change(s) to the file, as well as a copy of the record that was changed.
How to set up journaling in three easy steps
1. Create one or more journal receivers. Use the Create Journal Receiver (CRTJRNRCV) command to create a journal receiver. Think of the journal as a notebook binder and the journal receiver as the pages (i.e., the notebook paper where information is written). The journal receiver is where the journal entries are actually recorded. The journal "connects" the receiver to the file. It's a good habit to name the journal receiver the same as the journal, plus a numeric suffix such as 0 or 1. Also, you should put journal receivers in the same library as the file. For maximum protection, consider storing the journal receiver in a different ASP than the file so that their storage will not be on the same disk unit.
2. Create a journal. Use the Create Journal (CRTJRN) command to create a journal and specify the receiver created in step 1. Although you can journal multiple files to the same journal (and, in some cases, that is actually preferable), you will generally want to have a journal "serving" a single file. A good practice is to name the journal the same as the file and put it in the same library as the file, but store it in the same ASP as the journal receiver.
3. Start journaling the file. This is done by using the Start Journal Physical File (STRJRNPF) command. This is how you associate a file to a journal. Once the association is made, the system will record in the journal receiver a copy of any record added, updated or deleted from the file. Other activity, such as when the file is opened and closed, can also be recorded in the journal receiver if you choose by selecting the appropriate options on the STRJRNPF command.
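Putting the three steps together, a minimal sketch might look like this (the library, file, journal, and receiver names are hypothetical):
CRTJRNRCV JRNRCV(ORDLIB/ORDJRN0) TEXT('Receiver for ORDJRN')
CRTJRN JRN(ORDLIB/ORDJRN) JRNRCV(ORDLIB/ORDJRN0)
STRJRNPF FILE(ORDLIB/ORDERS) JRN(ORDLIB/ORDJRN) IMAGES(*BOTH) OMTJRNE(*NONE)
IMAGES(*BOTH) records both the before- and after-images of changed records, and OMTJRNE(*NONE) keeps the open and close entries mentioned in step 3.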
Four basic journal entry categories
The most common journal entries fall into four basic categories. Each category is represented by a one-character code (shown in parentheses in the following list). Within each category are a number of different journal entry types. Each journal entry type is represented by a two-character entry code (also shown in parentheses within the category descriptions below).
1. Journal and journal receiver operations (J). These include such things as references to the previous receiver (PR) or the next receiver (NR) in a chain. Also, at IPL-time, an entry is made (e.g., an IN entry for IPL after normal end) marking a critical chronological boundary in the file activity.
2. File operations (F). This category includes file opens (OP) and file closes (CL).
3. Record operations (R). Record updates (UP), deletes (DL), and new records written (PT and PX) all fall into this category.
4. Commitment control (C). Anything related to commitment control falls into this category. Some examples are begin commitment control (BC), start a commit cycle (SC), commit operation (CM) and rollback operation (RB).
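Once journaling is active, the Display Journal (DSPJRN) command shows what was captured. For example, to see only the record-level entries in the hypothetical journal from the sketch above:
DSPJRN JRN(ORDLIB/ORDJRN) JRNCDE((R)) ENTTYP(UP DL PT PX)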
Monday, September 24, 2007
Team Development Phases:
The use of teams is critical in a quality management environment. As a result, understanding the team life cycle is important in order to set proper expectations for the team and to help it communicate and function effectively. Teams go through four phases:
1. Forming
In this first stage, teams are dominated by feelings of confusion and anxiety, and are not able to focus on their purpose for long. Individuals may come to the team proud to be selected, but wondering why, and wondering about the other members. Information will be solicited and shared, and hidden agendas add to the uncertainty. Key accomplishments of this phase are identifying roles for team members, clarifying responsibilities and accepted behavior, and defining the team's purpose.
2. Storming
Conflict, defensiveness, and competition are key during this stage. Team members still think individually and wrestle with loyalties outside the team. As ideas emerge, they are attacked and defended. There may be confrontations, disagreements, and fluctuating attitudes over the likelihood of achieving the team's purpose. Barriers will be examined and the team will focus on well-known observations and common beliefs. Some people will not participate to prevent unfavorable responses, and others will test the leader's authority and form cliques.
3. Norming
In this stage the individuals start to become a team. Personal agendas, concerns, and loyalties are minimized. People are discussed less often than the issues, conflicts are resolved constructively, and the team focuses on its real purpose. As trust develops, riskier ideas are proposed and feelings exchanged. The willingness to discuss for the sake of the team grows, which results in better communication and cooperation.
4. Conforming
During this final stage, the team has matured into a cohesive unit. Individual strengths and weaknesses are understood and appreciated, leading to an overall satisfaction with the team membership. As steps are made toward the team's goals, there is individual learning and growth, and people feel satisfied with progress.
There are many variables that affect the length of time a team spends in each of these stages. The team experience of individual members is a big factor, and use of a facilitator can help. Clarity of the team's purpose and the level of management support are other factors. Teams may also get to the norming or conforming stages and fall back to earlier stages if assumptions are found to be incorrect or team membership changes.
Thursday, September 20, 2007
Submit a Prompted Command to Batch:
If we put a question mark before a command name in a CL program and run the program interactively, the system prompts the command, allows filling in the blanks, and then executes the command.
Similarly, there is a way to prompt the command and then send it to batch for execution.
For example, here's the type of code we are running interactively.
? SAVOBJ
MONMSG MSGID(CPF6801) EXEC(RETURN)
The system prompts the Save Object (SAVOBJ) command. If the user presses Enter, the system runs the command. However, if the user presses F3 or F12 to cancel the prompt, the Monitor Message command takes over.
If we want the command to run in batch, we need the QCMDCHK API. It is like the familiar QCMDEXC API in that it accepts the same parameters, but QCMDCHK only syntax-checks (and optionally prompts) the command; it does not run it.
Here's an example that prompts a Save Object command and submits it to batch.
DCL VAR(&CMD) TYPE(*CHAR) LEN(1024)
DCL VAR(&CMDLEN) TYPE(*DEC) LEN(15 5) VALUE(1024)
CHGVAR VAR(&CMD) VALUE('?SAVOBJ')
CALL PGM(QCMDCHK) PARM(&CMD &CMDLEN)
MONMSG MSGID(CPF6801) EXEC(RETURN)
SBMJOB RQSDTA(&CMD)
SNDPGMMSG MSG('Your command was submitted to batch.') +
MSGTYPE(*COMP)
The &CMD variable is initialized to a prompted SAVOBJ command. Calling QCMDCHK causes the system to prompt the command and update &CMD with the command string as the user completed it on the prompt screen.
After QCMDCHK runs, the Submit Job (SBMJOB) command is used to start a batch job. Notice that the command is passed through the Request Data (RQSDTA) parameter, not the Command (CMD) parameter; CMD will not accept a command string held in a CL variable, while RQSDTA accepts any character string.
Wednesday, September 19, 2007
COTS (Commercial off the Shelf):
There is a trend in the software industry for organizations to move from in-house developed software to commercial off-the-shelf (COTS) software and software developed by contractors. Contractors who are not part of the organization are referred to as outsourcing organizations; contractors working in another country are referred to as offshore software developers.
COTS software is normally developed prior to an organization selecting that software for its use. For smaller, less expensive software packages the software is normally “shrink wrapped” and is purchased as-is. As the COTS software becomes larger and more expensive, the contractor may be able to specify modifications to the software.
Differences or challenges faced with COTS software include:
• Task or items missing
• Software fails to perform
• Extra features
• Does not meet business needs
• Does not meet operational needs
• Does not meet people needs
Many organizations select COTS software based on an evaluation, which is a static analysis of the software's documentation and benefits, rather than on an assessment, in which the software is tested in a dynamic mode before use.
The following seven-step process includes the activities many organizations follow in assuring that the COTS software selected is appropriate for their business needs:
• Assure Completeness of Needs Requirements
• Define Critical Success Factor
• Determine Compatibility with Hardware, Operating System, and other COTS Software
• Assure the Software can be Integrated into Your Business System Work Flow
• Demonstrate the Software in Operation
• Evaluate People Fit
• Acceptance Test the Software Process
Convert Case:
Many times we face the situation of converting uppercase to lowercase and vice versa. In RPG we use the XLATE opcode, and in RPGLE we use %XLATE for it.
D up C 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
D lo C 'abcdefghijklmnopqrstuvwxyz'
D string S 10A inz('rpg dept')
/FREE
string = %XLATE(lo:up:'rpg dept');
// string now contains 'RPG DEPT'
string = %XLATE(up:lo:'RPG DEPT':6);
// string now contains 'RPG Dept'
/END-FREE
We can achieve the same functionality in CL by using an API.
The Convert Case (OPM, QLGCNVCS; ILE, QlgConvertCase) API provides a case conversion function that can be directly called by any application program. This API can be used to convert character data to either uppercase or lowercase.
This API supports conversion for single-byte, mixed-byte, and UCS2 (Universal Multiple-Octet Coded Character Set with 16 bits per character) character sets. For the mixed-byte character set data, only the single-byte portion of the data is converted. This API does not convert double-byte character data from any double-byte character set (DBCS) or from a mixed-byte character set.
This API can base case conversion on a CCSID, whereas the Convert Data (QDCXLATE) API uses only table objects.
Required Parameter Group:
1 Request control block Input Char(*)
2 Input data Input Char(*)
3 Output data Output Char(*)
4 Length of data Input Binary(4)
5 Error code I/O Char(*)
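As a sketch, here is how QLGCNVCS might be called from CL to uppercase a value, assuming the CCSID form of the request control block (request type 1, CCSID 0 for the job default, case request 0 for uppercase, and a 10-byte reserved area of hex zeros); the variable names are hypothetical:
DCL VAR(&RQSBLK) TYPE(*CHAR) LEN(22)
DCL VAR(&INPUT) TYPE(*CHAR) LEN(8) VALUE('rpg dept')
DCL VAR(&OUTPUT) TYPE(*CHAR) LEN(8)
DCL VAR(&DTALEN) TYPE(*CHAR) LEN(4)
DCL VAR(&ERRCOD) TYPE(*CHAR) LEN(8) VALUE(X'0000000000000000')
CHGVAR VAR(%BIN(&RQSBLK 1 4)) VALUE(1) /* Request type: CCSID */
CHGVAR VAR(%BIN(&RQSBLK 5 4)) VALUE(0) /* CCSID 0 = job default */
CHGVAR VAR(%BIN(&RQSBLK 9 4)) VALUE(0) /* 0 = convert to uppercase */
CHGVAR VAR(%SST(&RQSBLK 13 10)) VALUE(X'00000000000000000000') /* Reserved */
CHGVAR VAR(%BIN(&DTALEN 1 4)) VALUE(8)
CALL PGM(QLGCNVCS) PARM(&RQSBLK &INPUT &OUTPUT &DTALEN &ERRCOD)
After the call, &OUTPUT contains 'RPG DEPT'.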
Tuesday, September 18, 2007
Just-In-Time:
Just-in-time (JIT) is a revolutionary production system developed by Taiichi Ohno, a Toyota vice-president. He examined and challenged a known manufacturing principle, and developed a disciplined system that placed Toyota a quantum step ahead of its rivals in the Western countries. This system, now known as “the Toyota production system,” has set the standard for world-class manufacturing.
The ultimate goal of JIT production is to supply each process with exactly the required items, in exactly the required quantity, at exactly the required time. There are two conditions necessary to reach this situation: large amounts of production flexibility, and very short lead times.
The basic difference between the old method of supply and the new system is that the concept of a one-process department is eliminated. The same work tasks are no longer all performed in the same work area. These highly specialized departments are replaced with mixed lines of processing capabilities laid out in the sequence required to make the part or groups of parts. Parts having similar size, shape, material, and processing sequence are allocated to those lines by a system known as “group technology.” Parts are processed over these lines one at a time in very small batches.
Instead of producing work in one area and pushing it to the next operation, the goods stay with the producing department until the next step in the process comes to the preceding operation and takes only what is needed. The traditional “push” system, in which work is pushed through the operation from beginning to end, is changed to a “pull” system, in which work is only moved forward when it is needed by the next operation.
Just-in-time principles can be used in IT in the following ways:
1. Systems development and maintenance tasks are driven by when the user of an internal or external product or service needs them. Programs would not be developed before they are needed for test or production.
2. Systems analysts and programmers would not be given information and documents to store until they need them.
3. Internal information processes would be designed so individuals can move from job to job with minimal delay. For example, programmers should be able to stop working on one program and start another within the JIT ten-minute turnover standard.
Monday, September 17, 2007
Retrieve Call Stack:
The Retrieve Call Stack (QWVRCSTK) API returns the call stack information for the specified thread. The first call stack entry returned corresponds to the most recent call in the thread.
Required Parameter Group:
1 Receiver variable Output Char(*)
2 Length of receiver variable Input Binary(4)
3 Format of receiver information Input Char(8)
4 Job identification information Input Char(*)
5 Format of job identification information Input Char(8)
6 Error code I/O Char(*)
Default Public Authority: *USE
Threadsafe: Yes
A sample program to demonstrate this API:
D GetCaller PR Extpgm('QWVRCSTK')
D 2000
D 10I 0
D 8 CONST
D 56
D 8 CONST
D 15
D Var DS 2000
D BytAvl 10I 0
D BytRtn 10I 0
D Entries 10I 0
D Offset 10I 0
D EntryCount 10I 0
D VarLen S 10I 0 Inz(%size(Var))
D ApiErr S 15
D JobIdInf DS
D JIDQName 26 Inz('*')
D JIDIntID 16
D JIDRes3 2 Inz(*loval)
D JIDThreadInd 10I 0 Inz(1)
D JIDThread 8 Inz(*loval)
D Entry DS 256
D EntryLen 10I 0
D PgmNam 10 Overlay(Entry:25)
D PgmLib 10 Overlay(Entry:35)
D
* Call the API to retrieve the Call Stack Info ....
C CallP GetCaller(Var:VarLen:'CSTK0100':JobIdInf
C :'JIDF0100':ApiErr)
* It returns the call levels ...
C Do EntryCount
C Eval Entry = %subst(Var:Offset + 1)
C Eval Offset = Offset + EntryLen
C EndDo
C Eval *InLR = '1'
Friday, September 14, 2007
Pareto Chart:
A Pareto chart is used to graphically summarize and display the relative importance of the differences between groups of data. A Pareto chart is a bar graph. The lengths of the bars represent frequency or cost (time or money), and are arranged with longest bars on the left and the shortest to the right. In this way the chart visually depicts which situations are more significant.
When to use:
• When analyzing data about the frequency of problems or causes in a process.
• When there are many problems or causes and you want to focus on the most significant.
• When analyzing broad causes by looking at their specific components.
• When communicating with others about your data.
Procedure to use:
1. Decide what categories you will use to group items.
2. Decide what measurement is appropriate. Common measurements are frequency, quantity, cost and time.
3. Decide what period of time the chart will cover: One work cycle? One full day? A week?
4. Collect the data, recording the category each time. (Or assemble data that already exist.)
5. Subtotal the measurements for each category.
6. Determine the appropriate scale for the measurements you have collected. The maximum value will be the largest subtotal from step 5. (If you will do optional steps 8 and 9 below, the maximum value will be the sum of all subtotals from step 5.) Mark the scale on the left side of the chart.
7. Construct and label bars for each category. Place the tallest at the far left, then the next tallest to its right and so on. If there are many categories with small measurements, they can be grouped as “other.”
Steps 8 and 9 are optional but are useful for analysis and communication.
8. Calculate the percentage for each category: the subtotal for that category divided by the total for all categories. Draw a right vertical axis and label it with percentages. Be sure the two scales match: For example, the left measurement that corresponds to one-half should be exactly opposite 50% on the right scale.
9. Calculate and draw cumulative sums: Add the subtotals for the first and second categories, and place a dot above the second bar indicating that sum. To that sum add the subtotal for the third category, and place a dot above the third bar for that new sum. Continue the process for all the bars. Connect the dots, starting at the top of the first bar. The last dot should reach 100 percent on the right scale.
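As a quick arithmetic illustration of steps 8 and 9: if the category subtotals are 50, 30, 15, and 5 (a total of 100), the bars represent 50%, 30%, 15%, and 5%, and the cumulative dots fall at 50%, 80%, 95%, and finally 100%.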
Example:
[Figure not shown: a Pareto chart of the customer complaints received in each category.]
If all complaints cause equal distress to the customer, working on eliminating document-related complaints would have the most impact, and of those, working on quality certificates should be most fruitful.
Thursday, September 13, 2007
Using EditC and EditW in CL:
Suppose a CL program is to send a message to a user indicating the number of new orders that were entered into the database during a batch run. You might use some code like this:
DCL VAR(&CUSTORDERS) TYPE(*DEC) LEN(10 0)
DCL VAR(&ALPHANUM) TYPE(*CHAR) LEN(10)
RTVMBRD FILE(ORDERS) NBRCURRCD(&CUSTORDERS)
CHGVAR VAR(&ALPHANUM) VALUE(&CUSTORDERS)
SNDMSG MSG(&ALPHANUM *BCAT 'orders were added to +
the database.') TOUSR(Somebody)
The user would get a message like this:
0000000420 orders were added to the database.
The message is accurate, but the leading zeros make it less readable than it could be.
CL has no editing capabilities, but IBM has written three APIs to edit numbers. They yield the same results you get from edit codes and edit words in RPG and DDS. They have lots of parameters, but they are not hard to use.
The following solution applies the 1 (one) edit code to the number of orders that were added to the database.
DCL VAR(&CUSTORDERS) TYPE(*DEC) LEN(10 0)
DCL VAR(&ALPHANUM) TYPE(*CHAR) LEN(13)
DCL VAR(&EDTMASK) TYPE(*CHAR) LEN(256)
DCL VAR(&EDTMASKLEN) TYPE(*CHAR) LEN(4)
DCL VAR(&RCVVARLEN) TYPE(*CHAR) LEN(4)
DCL VAR(&ZROBAL) TYPE(*CHAR) LEN(1)
DCL VAR(&EDTCODE) TYPE(*CHAR) LEN(1) VALUE('1')
DCL VAR(&CURRENCY) TYPE(*CHAR) LEN(1)
DCL VAR(&SRCVARPCSN) TYPE(*CHAR) LEN(4)
DCL VAR(&SRCVARDEC) TYPE(*CHAR) LEN(4)
DCL VAR(&ERRORDATA) TYPE(*CHAR) LEN(16) +
VALUE(X'0000000000000000')
RTVMBRD FILE(ORDERS) NBRCURRCD(&CUSTORDERS)
CHGVAR VAR(%BIN(&SRCVARPCSN)) VALUE(10)
CHGVAR VAR(%BIN(&SRCVARDEC)) VALUE(0)
CALL PGM(QECCVTEC) PARM(&EDTMASK &EDTMASKLEN +
&RCVVARLEN &ZROBAL &EDTCODE &CURRENCY +
&SRCVARPCSN &SRCVARDEC &ERRORDATA)
CALL PGM(QECEDT) PARM(&ALPHANUM &RCVVARLEN +
&CUSTORDERS *PACKED &SRCVARPCSN &EDTMASK +
&EDTMASKLEN &ZROBAL &ERRORDATA)
SNDMSG MSG(&ALPHANUM *BCAT 'orders were added to +
the database.') TOUSR(Somebody)
QECCVTEC creates an editing mask for a field of a certain size that is to be edited with a certain edit code. This task needs to be done only once for a variable. QECEDT applies the edit mask to the numeric variable. This task could be done repeatedly, within a loop for instance, as necessary. The message now looks like this:
420 orders were added to the database.
A simple edit made a big difference in a message someone will look at every day, especially if that someone tends to confuse zeros and eights.
Wednesday, September 12, 2007
Form W-2, Wage and Tax Statement:
Form W-2, Wage and Tax Statement, is used in the United States income tax system as an information return to report wages paid to employees and the taxes withheld from them. The form is also used to report FICA taxes to the Social Security Administration. Relevant amounts on Form W-2 are reported by the Social Security Administration to the Internal Revenue Service.
Employers must complete a Form W-2 for each employee to whom they pay a salary, wage, or other compensation as part of the employment relationship. The Form W-2 reports income on a calendar year (January 1 through December 31) basis, regardless of the fiscal year used by the employer or employee for other Federal tax purposes.
The form consists of six copies:
• Copy A - Submitted by the employer to the Social Security Administration. (In addition, the employer must also submit Form W-3, which is a summary of all Forms W-2 completed, along with all Copies A submitted. The Form W-3 must be signed by the employer.)
• Copy B - To be sent to the employee and filed by the employee with the employee's federal income tax returns.
• Copy C - To be sent to the employee, to be retained by the employee for the employee's records.
• Copy D - To be retained by the employer, for the employer's records.
• Copy 1 - To be filed with the employee's state or local income tax returns (if any).
• Copy 2 - To be filed with the employee's state or local income tax returns (if any).
Employers are instructed to send copies B, C, 1, and 2 to their employees generally by January 31 of the year immediately following the year of income to which the Form W-2 relates, which gives these taxpayers about 2 1/2 months before the April 15 income tax due date. The Form W-2, with Form W-3, generally must be filed by the employer with the Social Security Administration by the end of February.
Monday, September 10, 2007
Types of Testing:
Testing assures that the end product (system) meets requirements and expectations under defined operating conditions. Within an IT environment, the end product is typically executable code.
Various types of testing are:
White-Box
White-box testing (logic driven) assumes that the path of logic in a unit or program is known. White-box testing consists of testing paths, branch by branch, to produce predictable results.
Black-Box
In black-box testing (data or condition driven), the focus is on evaluating the function of a program or application against its currently approved specifications. Specifically, this technique determines whether combinations of inputs and operations produce expected results. As a result, the initial conditions and input data are critical for black-box test cases.
Incremental
Incremental testing is a disciplined method of testing the interfaces between unit-tested programs and between system components. It involves adding unit-tested programs to a given module or component one by one, and testing each resultant combination. The approach can be Top-Down or Bottom-Up.
Thread
This test technique, which is often used during early integration testing, demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application. Thread testing and incremental testing are usually used together. For example, units can undergo incremental testing until enough units are integrated and a single business function can be performed, threading through the integrated components.
Regression
There are always risks associated with introducing change to an application. To reduce this risk, regression testing should be conducted during all stages of testing after a functional change, reduction, improvement, or repair has been made. This technique assures that the change will not cause adverse effects on parts of the application or system that were not supposed to change.
Six Sigma:
Motorola developed a concept called “Six Sigma Quality” that focuses on defect rates, as opposed to percent performed correctly. “Sigma” is a statistical term meaning one standard deviation. “Six Sigma” means six standard deviations. At the Six Sigma statistical level, only 3.4 items per million are outside of the acceptable level. Thus, the Six Sigma quality level means that out of every one million items counted 999,996.6 will be correct, and no more than 3.4 will be defective.
Experience has shown that in most systems, a Four Sigma quality level is the norm. At the Four Sigma level there are 6,120 defects per million parts, or about 6 defects per 1,000 opportunities to do a task correctly.
Six Sigma asserts the following:
• Continuous efforts to reduce variation in process outputs are key to business success
• Manufacturing and business processes can be measured, analyzed, improved and controlled
• Succeeding at achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management
Six Sigma has two key methodologies: DMAIC and DMADV. DMAIC is used to improve an existing business process, and DMADV is used to create new product or process designs for predictable, defect-free performance.
DMAIC
Basic methodology consists of the following five steps:
• Define the process improvement goals that are consistent with customer demands and enterprise strategy.
• Measure the current process and collect relevant data for future comparison.
• Analyze to verify relationship and causality of factors. Determine what the relationship is, and attempt to ensure that all factors have been considered.
• Improve or optimize the process based upon the analysis using techniques like Design of Experiments.
• Control to ensure that any variances are corrected before they result in defects. Set up pilot runs to establish process capability, transition to production and thereafter continuously measure the process and institute control mechanisms.
DMADV
Basic methodology consists of the following five steps:
• Define the goals of the design activity that are consistent with customer demands and enterprise strategy.
• Measure and identify CTQs (critical to qualities), product capabilities, production process capability, and risk assessments.
• Analyze to develop and design alternatives, create high-level design and evaluate design capability to select the best design.
• Design details, optimize the design, and plan for design verification. This phase may require simulations.
• Verify the design, set up pilot runs, implement production process and handover to process owners.
Delete Duplicates:
Many times we come across a situation where we need to remove duplicate records from a file, records that got inserted due to some problem with the keys or the application.
The easy way to remove them is to use SQL. Here is an SQL statement that deletes the duplicate key records from a file.
For example, consider a file FILEA in library LIBA and consider its key field to be KEYA. The query to delete the duplicate key records is as follows.
DELETE FROM LIBA/FILEA F1 WHERE RRN(F1) > (SELECT MIN(RRN(F2)) FROM LIBA/FILEA F2 WHERE F2.KEYA=F1.KEYA)
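To verify which keys are duplicated before (or after) running the delete, a quick check against the same hypothetical file:
SELECT KEYA, COUNT(*) FROM LIBA/FILEA GROUP BY KEYA HAVING COUNT(*) > 1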
More about SSN:
SSN is the widely accepted acronym for Social Security Number. An SSN is a 9-digit personal identification number issued to U.S. citizens and temporary (working) residents by the United States Social Security Administration.
A social security number is required to apply for a job, receive any government assistance, file taxes, and obtain a mortgage or credit. For all of these reasons, a SSN is also one of the most private pieces of personal information an individual uses, and should be kept private.
Three different types of Social Security cards are issued. The most common type contains the cardholder's name and number. Such cards are issued to U.S. citizens and U.S. permanent residents. There are also two restricted types of Social Security cards:
• One reads "NOT VALID FOR EMPLOYMENT." Such cards cannot be used as proof of work authorization, and are not acceptable as a List C document on the I-9 form.
• The other reads "VALID FOR WORK ONLY WITH DHS AUTHORIZATION." These cards are issued to people who have temporary work authorization in the U.S. They can satisfy the I-9 requirement, if they are accompanied by a work authorization card.
The Social Security number is a nine-digit number in the format "123-45-6789". The number is divided into three parts.
• The Area Number, the first three digits, was assigned by geographical region. The Area Number represented the office code in which the card was issued. This did not necessarily have to be in the area where the applicant lived, since a person could apply for a card in any Social Security office.
• The middle two digits are the group number. They have no special geographic or data significance but merely serve to break the number into conveniently sized blocks for orderly issuance.
The group numbers range from 01 to 99. However, they are not assigned in consecutive order. For administrative reasons, group numbers are issued in the following order:
1. ODD numbers from 01 through 09
2. EVEN numbers from 10 through 98
3. EVEN numbers from 02 through 08
4. ODD numbers from 11 through 99
As an example, group number 98 will be issued before 11.
• The last four digits are serial numbers. They represent a straight numerical sequence of digits from 0001-9999 within the group.
Currently, a valid SSN cannot have an area number above 772, the highest area number which the Social Security Administration has allocated.
There are also special numbers which will never be allocated:
• Numbers with all zeros in any digit group (000-xx-xxxx, xxx-00-xxxx, xxx-xx-0000).
• Numbers of the form 666-xx-xxxx, probably due to the potential controversy. Though the omission of this area number is not acknowledged by the SSA, it remains unassigned.
• Numbers from 987-65-4320 to 987-65-4329 are reserved for use in advertisements.
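These rules translate directly into a simple plausibility check. Here is a minimal RPGLE sketch that applies only the published rules above (the field names are hypothetical):
D ssn S 9A inz('987654320')
D valid S N
/FREE
valid = *ON;
// No all-zero area, group, or serial number
if %subst(ssn:1:3) = '000' or %subst(ssn:4:2) = '00'
or %subst(ssn:6:4) = '0000';
valid = *OFF;
endif;
// Area 666 is unassigned; areas above 772 have not been allocated
if %subst(ssn:1:3) = '666' or %dec(%subst(ssn:1:3):3:0) > 772;
valid = *OFF;
endif;
// 987-65-4320 through 987-65-4329 are reserved for advertisements
if %subst(ssn:1:8) = '98765432';
valid = *OFF;
endif;
/END-FREE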
Wild Card Selection in OPNQRYF:
The %WLDCRD function lets you select any records that match your selection values, in which the underline (_) matches any single character. The two underline characters in the following example allow any day in the month of March to be selected.
OPNQRYF FILE(FILEA) +
QRYSLT('%DIGITS(DATE) *EQ %WLDCRD("03__2005")')
The wildcard function is not supported for DATE, TIME, or TIMESTAMP data types. The %WLDCRD function also allows you to name the wild card character (underline is the default).
The wild card function supports two different forms:
• A fixed-position wild card as shown in the previous example in which the underline matches any single character as in the following example:
QRYSLT('FLDA *EQ %WLDCRD("A_C")')
This compares successfully to ABC, ACC, ADC, AxC, and so on. In this example, the field being analyzed only compares correctly if it is exactly 3 characters in length. If the field is longer than 3 characters, you also need the second form of wild card support.
• A variable-position wild card will match any zero or more characters. The Open Query File (OPNQRYF) command uses an asterisk (*) for this type of wild card variable character or you can specify your own character. An asterisk is used in the following example:
QRYSLT('FLDB *EQ %WLDCRD("A*C*") ')
This compares successfully to AC, ABC, AxC, ABCD, AxxxxxxxC, and so on. The asterisk causes the command to ignore any intervening characters if they exist. Notice that in this example the asterisk is specified both before and after the character or characters that can appear later in the field. If the asterisk were omitted from the end of the search argument, a record would be selected only if the field ends with the character C.
You must specify an asterisk at the start of the wild card string if you want to select records where the remainder of the pattern starts anywhere in the field. Similarly, the pattern string must end with an asterisk if you want to select records where the remainder of the pattern ends anywhere in the field.
For example, you can specify:
QRYSLT('FLDB *EQ %WLDCRD("*ABC*DEF*") ')
You get a match on ABCDEF, ABCxDEF, ABCxDEFx, ABCxxxxxxDEF, ABCxxxDEFxxx, xABCDEF, xABCxDEFx, and so on.
You can combine the two wildcard functions as in the following example:
QRYSLT('FLDB *EQ %WLDCRD("ABC_*DEF*") ')
You get a match on ABCxDEF, ABCxxxxxxDEF, ABCxxxDEFxxx, and so on. The underline forces at least one character to appear between the ABC and DEF (for example, ABCDEF would not match).
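For context, %WLDCRD selection is typically wrapped in the usual OPNQRYF sequence: share the ODP, open the query, let a program read the selected records, and then clean up. A sketch, in which MYPGM is a hypothetical program that reads FILEA:
OVRDBF FILE(FILEA) SHARE(*YES)
OPNQRYF FILE((FILEA)) QRYSLT('FLDB *EQ %WLDCRD("A*C*")')
CALL PGM(MYPGM)
CLOF OPNID(FILEA)
DLTOVR FILE(FILEA)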
Edit Spool file with SEU:
Follow these steps to edit a spooled file with SEU.
1. Create a program-described non-source physical file member. The record length should be one byte longer than the report. That is, for a 132-column report, create a physical file with 133-byte records.
CRTPF QTEMP/TEMP RCDLEN(133)
2. Use the Copy Spooled File (CPYSPLF) command to place the report in the physical file member you just created. Specify that you want to use first-character forms control in order to prefix each record with skipping and spacing information.
CPYSPLF FILE(QPRTLIBL) TOFILE(QTEMP/TEMP) +
JOB(*) CTLCHAR(*FCFC)
3. Unless you already have one, create a source physical file in which to edit the report. The record length should be 12 bytes more than the physical file you created in step 1.
CRTSRCPF FILE(QTEMP/TEMPSRC) RCDLEN(145) MBR(TEMPSRC)
4. Copy the non-source physical file member to the source physical file member, specifying FMTOPT(*CVTSRC).
CPYF FROMFILE(QTEMP/TEMP) TOFILE(QTEMP/TEMPSRC) +
MBROPT(*REPLACE) FMTOPT(*CVTSRC)
5. Use SEU to edit the report. Remember that the first column is reserved for the forms control characters.
STRSEU SRCFILE(QTEMP/TEMPSRC) SRCMBR(TEMPSRC)
6. Now reverse the process. First copy the source member to the non-source member.
CPYF FROMFILE(QTEMP/TEMPSRC) TOFILE(QTEMP/TEMP) +
MBROPT(*REPLACE) FMTOPT(*CVTSRC)
7. Build a new report by copying the non-source member to a program-described printer file, such as QSYSPRT. You'll need an override to make CPYF interpret the forms control characters.
OVRPRTF FILE(QSYSPRT) CTLCHAR(*FCFC)
CPYF FROMFILE(QTEMP/TEMP) TOFILE(QSYSPRT)
Extreme Programming:
Extreme Programming (or XP) is a software engineering methodology.
Extreme Programming is
An attempt to reconcile humanity and productivity
A mechanism for social change
A path to improvement
A style of development
A software development discipline
The main aim of XP is to reduce the cost of change. In traditional system development methods, the requirements for the system are determined at the beginning of the development project and are often fixed from that point on. This means that the cost of changing the requirements at a later stage will be high.
XP sets out to reduce the cost of change by introducing basic values, principles and practices. By applying XP, a system development project should be more flexible with respect to changes.
XP values
Extreme Programming concentrates on five values.
Communication
Simplicity
Feedback
Courage
Respect
XP gives all developers a shared view of the system that matches the view held by the users, rather than conveying it through documentation as traditional methods do. To this end, Extreme Programming favors simple designs, common metaphors, collaboration of users and programmers, frequent verbal communication, and feedback.
XP encourages starting with the simplest solution. XP focuses on designing and coding for the needs of today instead of those of the future, and avoids investing in possible future requirements that might change before they become relevant. Coding and designing for uncertain future requirements carries the risk of spending resources on something that might never be needed.
Feedback is closely related to communication and simplicity. Flaws in the system are easily communicated by writing a unit test that proves a certain piece of code will break. The direct feedback from the system tells programmers to recode that part. Because the system stays simple, the team can also quickly give the user feedback about new requirements.
Several practices embody courage. One is the commandment to always design and code for today and not for tomorrow; this is an effort to avoid getting bogged down in design and then needing a lot of effort to implement anything else. Courage enables developers to feel comfortable refactoring their code when necessary.
The respect value manifests in several ways. In Extreme Programming, team members respect each other because programmers never commit changes that break compilation, make existing unit tests fail, or otherwise delay the work of their peers.
Adopting the four earlier values earns respect from others on the team. Nobody on the team should feel unappreciated or ignored. This ensures a high level of motivation and encourages loyalty toward the team and the goals of the project. This value depends heavily on the other values and is very much oriented toward people working as a team.
Ordering Technique in SQL:
We use the ORDER BY clause in SQL all the time. Let's assume there are three Pay Codes (Personal Holiday, Vacation and Sick) and we want to sort the data in that order. Alphabetical sorting of the pay codes would put Sick ahead of Vacation, so that's out.
This specific ordering can be accomplished with the LOCATE function. In the first parameter, specify the name of the sort field (I'll assume it's PAYCODE for this example). The second parameter should contain a list of the Pay Codes. If the Pay Code field is fixed-length, be sure to pad each Pay Code name in the list, including the last one, with trailing blanks. Here's an example:
select * from PAYTABLE
order by locate(PAYCODE, 'PersonalHoliday Vacation Sick ')
This would sort the Pay Code in the requested order.
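An alternative that avoids the padding concern is a CASE expression. Here is a sketch under the assumption that the pay code values are the literals 'Personal Holiday', 'Vacation' and 'Sick' (adjust them to match the actual data):
select *
from PAYTABLE
order by case PAYCODE
when 'Personal Holiday' then 1
when 'Vacation' then 2
when 'Sick' then 3
else 4
end
Any unexpected pay codes sort last because of the ELSE.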
Project Life Cycle Models:
The following are the standard project lifecycle models.
Pure Waterfall
This is the classical system development model. It consists of discontinuous phases:
1. Concept
2. Requirements
3. Architectural design
4. Detailed design
5. Coding and development
6. Testing and implementation
The pure waterfall performs well for products with clearly understood requirements or when working with well-understood technical tools, architectures and infrastructures. Its weaknesses frequently make it inadvisable when rapid development is needed; in those cases, modified models may be more effective.
Spiral
The spiral is a risk-reduction oriented model that breaks a software project up into mini-projects, each addressing one or more major risks. After major risks have been addressed, the spiral model terminates as a waterfall model. Spiral iterations involve six steps:
1. Determine objectives, alternatives and constraints.
2. Identify and resolve risks.
3. Evaluate alternatives.
4. Develop the deliverables for that iteration and verify that they are correct.
5. Plan the next iteration.
6. Commit to an approach for the next iteration.
For projects with risky elements, it's beneficial to run a series of risk-reduction iterations which can be followed by a waterfall or other non-risk-based lifecycle.
Modified Waterfall
The modified waterfall uses the same phases as the pure waterfall, but they are not performed on a discontinuous basis; the phases are allowed to overlap when needed. The waterfall can also be split into subprojects at an appropriate phase (such as after the architectural design or detailed design).
Risk reduction spirals can be added to the top of the waterfall to reduce risks prior to the waterfall phases. The waterfall can be further modified using options such as prototyping, JADs or CRC sessions or other methods of requirements gathering done in overlapping phases.
Evolutionary Prototyping
Evolutionary prototyping uses multiple iterations of requirements gathering and analysis, design and prototype development. After each iteration, the result is analyzed by the customer. Their response creates the next level of requirements and defines the next iteration.
The manager must be vigilant to ensure it does not become an excuse to do code-and-fix development.
Code-and-Fix
If you don't use a methodology, it's likely you are doing code-and-fix. Code-and-fix rarely produces useful results. It is very dangerous as there is no way to assess progress, quality or risk.
Code-and-fix is only appropriate for small throwaway projects like proof-of-concept, short-lived demos or throwaway prototypes.
Staged Delivery
Although the early phases cover the deliverables of the pure waterfall, the design is broken into deliverable stages for detailed design, coding, testing and deployment.
For staged delivery, management must ensure that stages are meaningful to the customer. The technical team must account for all dependencies between different components of the system.
Evolutionary Delivery
Evolutionary delivery straddles evolutionary prototyping and staged delivery.
For evolutionary delivery, the initial emphasis should be on the core components of the system. This should consist of lower level functions which are unlikely to be changed by customer feedback.
Design-to-Schedule
Like staged delivery, design-to-schedule is a staged release model. However, the number of stages to be accomplished is not known at the outset of the project.
In design-to-schedule delivery, it is critical to prioritize features and plan stages so that the early stages contain the highest-priority features. Leave the lower priority features for later.
Design-to-Tools
When using a design-to-tools approach, the capability goes into a product only if it is directly supported by existing software tools. If it isn't supported, it gets left out. Besides architectural and functional packages, these tools can be code and class libraries, code generators, rapid-development languages and any other software tools that dramatically reduce implementation time.
Consider the tradeoffs of time-to-market versus lock-in and functionality compromises. This may be an appropriate approach for a high-risk element of the overall project or architecture.
Off-the-Shelf
Following requirements definition, analysis must be done to compare the package to the business, functional and architectural requirements.
It is critical to know how the desired features compare with the packaged set and if the package can be customized.
These standard models can be adapted to fit the industry issues, corporate culture, time constraints and team vulnerabilities that make up your environment. We can customize a methodology to fit your needs or help you with special or problem projects.
Rapid Application Development (RAD):
Rapid application development (RAD) is a software development process that involves iterative development, the construction of prototypes, and the use of Computer-aided software engineering (CASE) tools. It is described as a process through which the development cycle of an application is expedited. Rapid Application Development thus enables quality products to be developed faster, saving valuable resources.
Pros
1. Increased speed of development through methods including rapid prototyping, virtualization of system related routines, the use of CASE tools, and other techniques.
2. Decreased end-user functionality (arising from narrower design focus), hence reduced complexity
3. Larger emphasis on simplicity and usability of GUI design
Cons
1. Reduced scalability and reduced features when a RAD-developed application starts as a prototype and evolves into a finished application
2. Reduced features due to time boxing, when features are pushed to later versions in order to finish a release in a short amount of time
Some companies offer products that provide some or all of the tools for RAD software development. (The concept can be applied to hardware development as well.) These products include requirements gathering tools, prototyping tools, computer-aided software engineering tools, language development environments such as those for the Java platform, groupware for communication among development members, and testing tools. RAD usually embraces object-oriented programming methodology, which inherently fosters software re-use.
MBROPT in CPYF:
Every one of us is familiar with the CPYF command. We use MBROPT(*REPLACE) to replace the records in the to-file with the records from the from-file, and MBROPT(*ADD) to add the records from the from-file to the to-file.
There is one less-used option called *UPDADD. This option updates the duplicate records in the to-file and adds the new records from the from-file to the to-file. (Duplicates are matched by key, so the to-file must be uniquely keyed.)
For example, consider a file ORDDTL of sales order details. Each line represents an item on the order, and the file is uniquely keyed on order number and line number.
Order   Line    Item
Number  Number  Number  Quantity
======  ======  ======  ========
1       1       I01     5
1       2       I08     3
2       1       I01     6
2       2       I09     6
3       1       I01     3
3       2       I07     3
3       3       I09     6
3       4       I02     6
4       1       I02     5
4       2       I08     5
4       3       I22     6
6       1       I01     8
Assume a second file ORDDTLCHGS of the same format with additional order lines and/or changes to existing order lines.
Order   Line    Item
Number  Number  Number  Quantity
======  ======  ======  ========
5       1       I18     4
6       1       I01     12
7       1       I05     6
Notice that two lines, for orders 5 and 7, do not exist in the ORDDTL file. The record for order 6, line 1, contains a new quantity for that order line.
To apply the updates to ORDDTL, use the *UPDADD option, like this:
CPYF FROMFILE(ORDDTLCHGS) TOFILE(ORDDTL) MBROPT(*UPDADD)
Here's the resulting dataset.
Order   Line    Item
Number  Number  Number  Quantity
======  ======  ======  ========
1       1       I01     5
1       2       I08     3
2       1       I01     6
2       2       I09     6
3       1       I01     3
3       2       I07     3
3       3       I09     6
3       4       I02     6
4       1       I02     5
4       2       I08     5
4       3       I22     6
6       1       I01     12
5       1       I18     4
7       1       I05     6
Notice that the quantity has changed for order 6, line 1, and the lines for orders 5 and 7 have been added.
Software Configuration Management (SCM):
When computer software is built, change happens. And because it happens, it must be controlled effectively.
Software configuration management (SCM) is a set of activities that are designed to control change by identifying the work products that are likely to change, establishing relationships among them, defining mechanisms for managing different versions of these work products, controlling changes that are imposed, and auditing and reporting on the changes that are made.
In simple terms, SCM is a methodology to control and manage a software development project.
The goals of SCM are generally:
• Configuration Identification - What code are we working with?
• Configuration Control - Controlling the release of a product and its changes.
• Status Accounting - Recording and reporting the status of components.
• Review - Ensuring completeness and consistency among components.
• Build Management - Managing the process and tools used for builds.
• Process Management - Ensuring adherence to the organization's development process.
• Environment Management - Managing the software and hardware that host our system.
• Teamwork - Facilitating team interactions related to the process.
• Defect Tracking - Making sure every defect has traceability back to the source.
There are many SCM tools available to manage software projects. Some of the SCM tools for the AS/400 are Implementer, Turnover, Aldon, CVS, etc.
Simple way to send AS400 Objects to Remote system:
Many times we need to send AS/400 objects to another system. If no network connection is configured between the systems and we don't want to use tape, the simple approach is to follow the steps below.
Create and Save objects to a Save File
Save the objects we want to send into a save file. The primary reason to use a save file is that we can save any type of AS/400 object into it.
Use the AS/400 Create Save File (CRTSAVF) command to create a save file
CRTSAVF FILE(LIB/SAVFIL)
Then use the AS/400 Save Library (SAVLIB) or Save Object (SAVOBJ) commands to save objects to the save file
SAVLIB LIB(PGMLIB) DEV(*SAVF) SAVF(LIB/SAVFIL) TGTRLS(*PRV)
SAVOBJ OBJ(*ALL) LIB(PGMLIB) DEV(*SAVF) SAVF(LIB/SAVFIL) TGTRLS(*PRV)
FTP the Save File to the PC
The FTP commands used are:
OPEN – Opens the FTP connection from the PC to the AS/400. Required entries are the AS/400 IP address or host name, user ID and password.
BIN – Sets the transfer type to binary (image). This is required when transferring a save file.
GET – Receives the save file to the PC. The parameters are the remote file, which identifies the save file on the AS/400, and the local file, which identifies where we want the copy of the save file placed. A sample session follows this list.
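A minimal session sketch from a PC command prompt, assuming a hypothetical host name MYAS400 and the LIB/SAVFIL save file created earlier (the local name savfil.savf is also just an example):
ftp MYAS400
(sign on with your user ID and password when prompted)
bin
get LIB/SAVFIL savfil.savf
quit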
Work with the Save File image on the PC
Once we have the save file image on the PC, we can work with it like any other PC file. Before sending the PC file to recipients, you can ZIP the file.
Prepare to upload the Save File image
Before the recipient can use FTP to upload the save file image to their AS/400 system, they need to run the CRTSAVF command on their AS/400 to create a save file to upload into. The save file should be empty before starting the upload.
Uncompress the save file image if needed.
FTP the Save File to the AS/400
The FTP commands used are:
OPEN – Opens the connection to the AS/400 system.
BIN – Changes the session to binary mode.
SEND – Sends the save file image from the PC to the AS/400 system. A sample session follows this list.
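Again a minimal sketch, assuming the same hypothetical names and an empty save file LIB/SAVFIL already created on the target system:
ftp MYAS400
(sign on with your user ID and password when prompted)
bin
send savfil.savf LIB/SAVFIL
quit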
Restore from the Save File
Now that the save file is on the recipient's AS/400 system, they can use the AS/400 Restore Library (RSTLIB) or Restore Object (RSTOBJ) commands to restore from the save file. Similar to the SAVLIB and SAVOBJ commands, we can specify the device parameter as *SAVF.
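For instance, mirroring the save commands above (same hypothetical names; use whichever command matches how the objects were saved):
RSTLIB SAVLIB(PGMLIB) DEV(*SAVF) SAVF(LIB/SAVFIL)
RSTOBJ OBJ(*ALL) SAVLIB(PGMLIB) DEV(*SAVF) SAVF(LIB/SAVFIL)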
Service Oriented Architecture (SOA):
SOA is a business-centric IT architectural approach that supports integrating the business as linked, repeatable business tasks, or services. SOA helps users build composite applications, which are applications that draw upon functionality from multiple sources within and beyond the enterprise to support horizontal business processes.
In many organizations, the people who are delivering services are in different buildings, countries and time zones. They can even be in different companies when work is outsourced or performed by business partners. In such distributed development environments, it’s vital that the established SOA governance policies guide management of activities throughout the service lifecycle.
The IBM Rational® service-oriented architecture (SOA) solution provides consulting services, an SOA governance model, and service lifecycle management infrastructure and tools that, together, can help realize the full potential of SOA investments.
A major challenge in adopting an SOA is that many groups, both internal and external to the organization, contribute to the execution of strategic business processes. With an SOA, once-siloed data is now exposed as services and shared across departments, lines of business and even companies—raising concerns about decision rights and process measurement and control. Who makes a decision on whether a service can be accessible to other applications? Who should fund the shared service? Who owns it? How is it implemented? How do you determine whether it achieves expected results? Who’s responsible for fixing it if it breaks?
The Rational SOA solution enables organizations to answer these questions with a holistic approach to SOA adoption that addresses both governance and management issues. A proven SOA governance model guides them through the process of setting up the policies, procedures and processes required for efficient and effective decision making throughout the business and IT organization. Once established, the governance process is executed through a service lifecycle management infrastructure, which defines the end-to-end process of how services are developed and managed throughout the enterprise.
Cause and Effect Diagram:
The cause and effect diagram is used to explore all the potential or real causes (or inputs) that result in a single effect (or output). Causes are arranged according to their level of importance or detail, resulting in a depiction of relationships and a hierarchy of events. This can help in the search for root causes, identify areas where there may be problems, and compare the relative importance of different causes.
Causes in a cause & effect diagram are frequently arranged into four major categories.
• Manpower, methods, materials, and machinery (recommended for manufacturing)
• Equipment, policies, procedures, and people (recommended for administration and service)
The cause-and-effect diagram is also called the Ishikawa diagram (after its creator, Kaoru Ishikawa of Japan), or the fishbone diagram (due to its shape).
Steps to use the Cause and Effect Diagram:
1. Identify the problem
Write down the exact problem in detail. Where appropriate, identify who is involved, what the problem is, and when and where it occurs. Write the problem in a box on the left-hand side of a large sheet of paper, then draw a line across the paper horizontally from the box. This arrangement, looking like the head and spine of a fish, gives you space to develop ideas.
2. Work out the major factors involved
Next identify the factors that may contribute to the problem. Draw lines off the spine for each factor, and label it. These may be people involved with the problem, systems, equipment, materials, external forces, etc.
3. Identify possible causes
For each of the factors considered in step 2, brainstorm possible causes of the problem that may be related to that factor. Show these as smaller lines coming off the 'bones' of the fish. Where a cause is large or complex, it may be best to break it down into sub-causes; show these as lines coming off each cause line.
4. Analyze your diagram
By this stage you should have a diagram showing all the possible causes of the problem you can think of. Depending on the complexity and importance of the problem, you can now investigate the most likely causes further. This may involve setting up investigations, carrying out surveys, and so on, designed to test whether the assessments are correct.
STRCPYSCN:
Often we need to demonstrate some functionality to a remote user working on the same AS400 box. Though there are various third-party tools such as Sametime screen sharing, we also have the advantage of the built-in AS400 command STRCPYSCN.
STRCPYSCN is a simple command that allows viewing or capturing all the screen output from an active 5250 session. So if we are in Hyderabad and the onsite team is in the US, both running on the same i5 partition, we can easily see what the other is doing on their 5250 session by running the following command on the green-screen session:
STRCPYSCN SRCDEV(job_name) OUTDEV(*REQUESTER)
where job_name is the job name of the 5250 device session that we want to view.
After STRCPYSCN starts, i5/OS will show the following break message on the 5250 target screen that we are asking to monitor.
Type reply (if required), press Enter.
From . . . : JOE 05/10/06 20:53:47
Cause . . . . . : Start copy screen has been requested
with output to job_name. Reply C to prevent copy screen or
G to allow it. (C G)
Reply . . .
This message asks the target user for permission to monitor their session. If the user types 'C', permission is denied and the command ends on the requesting session. If the user types 'G', permission is granted and copies of the target session's 5250 screens are automatically forwarded to the receiver's session, following along as the target user works through various i5/OS commands and menus.
There are a few things to be aware of when running STRCPYSCN over a remote user's terminal session. First, the receiver's keyboard is locked during STRCPYSCN processing; the receiver cannot run commands on either the target 5250 session or their own session.
In addition, the forwarded screen captures on the viewing machine will always be one screen behind those on the target machine, so the target user needs to stay one screen ahead of what the viewer is looking at.
To end the user's STRCPYSCN session, run the End Copy Screen command (ENDCPYSCN).
While STRCPYSCN is not as elegant as PC remote-control programs, it serves the purpose of providing a quick and dirty way to troubleshoot 5250 green-screen problems without loading any additional software on the target machine. Besides troubleshooting, STRCPYSCN can also be used for demonstrating software functionality.
Business to Business (B2B):
B2B (business-to-business), also known as e-biz, is the exchange of products, services, or information between businesses rather than between businesses and consumers, and it is performed in much higher volumes than business-to-consumer (B2C) applications. Retailers are typically B2C companies, while manufacturers, wholesalers and other suppliers are typically B2B companies. For example, a company selling photocopiers would likely be a B2B sales organization as opposed to a B2C sales organization.
B2B Web sites can be sorted into:
• Company Web sites, since the target audience for many company Web sites is other companies and their employees. Sometimes a company Web site serves as the entrance to an exclusive extranet available only to customers or registered site users. Some company Web sites sell directly from the site, effectively e-tailing to other businesses.
• Product supply and procurement exchanges, where a company purchasing agent can shop for supplies from vendors, request proposals, and, in some cases, bid to make a purchase at a desired price. Sometimes referred to as e-procurement sites, some serve a range of industries and others focus on a niche market.
• Specialized or vertical industry portals which provide a "subWeb" of information, product listings, discussion groups, and other features. These vertical portal sites have a broader purpose than the procurement sites (although they may also support buying and selling).
• Brokering sites that act as an intermediary between someone wanting a product or service and potential providers. Equipment leasing is an example.
• Information sites (sometimes known as infomediary), which provide information about a particular industry for its companies and their employees. These include specialized search sites and trade and industry standards organization sites.
Many B2B sites may seem to fall into more than one of these groups. Models for B2B sites are still evolving.
Capability Maturity Model (CMM):
The Capability Maturity Model (CMM) is a way to develop and refine an organization's processes. A maturity model can be used as a benchmark for assessing different organizations on an equivalent basis. The model describes the maturity of a company based upon the projects it is handling and the related clients.
The CMM framework organizes process improvement into five levels of maturity that lay successive foundations to support short- and long-term process improvement initiatives.
1. The process capability at Level 1 is considered ad hoc because the software development process constantly changes as the work progresses.
2. The capability of Level 2 organizations is summarized as disciplined, because the ability to successfully repeat planning and tracking of earlier projects results in stability.
3. The capability of Level 3 organizations is summarized as standard and consistent because engineering and management activities are stable and repeatable.
4. The capability of Level 4 organizations is summarized as predictable because the process is measured and operates within measurable limits.
5. The capability of Level 5 organizations is characterized as continuously improving, because projects strive to improve the process capability and process performance.
The five maturity levels define an ordinal scale that enables an organization to determine its level of process capability. The framework is also an aid to quality planning as it affords organizations the opportunity to prioritize improvement efforts.
Full Outer Join:
Suppose we need to join two physical files using SQL. It's possible that some records in the first file won't have matches in the second file, and that some records in the second file won't have matches in the first. If we need all the records from both files, the question of what type of join to use arises.
The formula that we can use for it is,
Left outer join + Right Exception join = Full Outer join.
Consider the following example. We have a Faculty master table and a Schedule table, and we need a list of the professors and the classes each one has been assigned to teach. Some professors have not yet been assigned to teach any classes; some classes have not yet been assigned to a professor. We require the full schedule list.
Faculty Table
Professor ID   Name
P01            Cake, Patty
P02            Dover, Ben
P03            Flett, Pam
Schedule:
Class ID   Period   Building   Room   Instructor
101        A        41         320    P02
102        A        41         218    P03
103        B        41         212    P02
104        B        42         302    NULL
105        C        41         165    P04
Notice a few things:
• No classes have been assigned to instructor P01.
• Class 104 has not been assigned to an instructor.
• Class 105 has been assigned to non-existent instructor P04.
Here's the join:
SELECT f.FacID, f.Name, s.classID, s.period, s.Building, s.Room
FROM Faculty AS f
LEFT JOIN Schedule AS s
ON f.FacID = s.Instructor
UNION
SELECT s.Instructor, f.Name, s.classID, s.period, s.Building, s.Room
FROM Faculty AS f
RIGHT EXCEPTION JOIN Schedule AS s
ON f.FacID = s.Instructor
Here is the result set.
Instructor ID   Instructor    Class   Period   Building   Room
P01             Cake, Patty   NULL    NULL     NULL       NULL
P02             Dover, Ben    101     A        41         320
P02             Dover, Ben    103     B        41         212
P03             Flett, Pam    102     A        41         218
NULL            NULL          104     B        42         302
P04             NULL          105     C        41         165
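On releases of DB2 that support FULL OUTER JOIN directly, the same result can be produced in one statement. A sketch under that assumption, using COALESCE to reproduce the combined Instructor ID column from the UNION version:
SELECT COALESCE(f.FacID, s.Instructor) AS FacID, f.Name,
s.classID, s.period, s.Building, s.Room
FROM Faculty AS f
FULL OUTER JOIN Schedule AS s
ON f.FacID = s.Instructor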
Email Etiquette:
Email etiquette refers to a set of dos and don’ts recommended by business and communication experts for using email effectively and appropriately. Email etiquette offers some guidelines that all writers can use to facilitate better communication between themselves and their readers.
Few tips for writing better Emails:
The email subject should be detailed enough to give the recipient an idea about the email content without having to open it.
Be concise and to the point
Try to avoid abbreviations and field-specific jargon so that your recipient can understand you.
Never type an email message in all capital letters. Caps are considered impolite and resemble shouting in speech.
Avoid long sentences
Keep your language gender neutral
Use active instead of passive
Do not write an email while you are in a really bad mood. It would reflect on the style of your writing.
Do not use email to discuss confidential information
Do not overuse Reply to All
Always reply to emails, especially the ones specifically addressed to you; the sender is waiting to hear from you.
Always spell-check your email before sending it to be sure the message is free of grammatical, vocabulary and usage errors.
If you have to email more than two documents as attachments, zip them in one file. Doing so would ensure that your recipient won't miss downloading any file.
Do not request a Read Notification Receipt.
Better way to send Break messages to all Active users:
Suppose we want to send a message to all signed-on interactive 5250 users, asking them to perform a specific function (i.e., get off the system, exit a program, etc.). The requirement to send only to signed-on users arises because some i5/OS functions, such as the Send Break Message (SNDBRKMSG) command, deliver the message to every workstation message queue even if no user is signed on at that device. Delivering messages to all terminal message queues can cause confusion when users sign on to an unused terminal later and find an old message that is no longer relevant. The goal is to avoid that confusion by sending break messages only to currently signed-on users.
The message must also be delivered as an immediate message that each user receives in break mode (where the message automatically displays on the user's screen, regardless of what the user is doing), so that the user sees the message as soon as it arrives. IBM offers the following two ways to accomplish this in i5/OS.
By Menu: The OS/400 Operational Assistance Menu (GO ASSIST) provides a Send Message option to send messages to individual users, to all users enrolled in the system, or to all active users.
Send a Message
Type information below, then press F10 to send.
Message needs reply . . . . . . N Y=Yes, N=No
Interrupt user . . . . . . . . . Y Y=Yes, N=No
Message text . . . . . . . . . . Type your message here
Send to . . . . . . . . . . . . *ALLACT Name, F4 for list
F1=Help F3=Exit F10=Send F12=Cancel
By API: i5/OS includes the Send Message (QEZSNDMSG) API, which lets you embed a program call within another program or a command so that the message can be sent automatically without manual input.
Return on Investment (ROI):
ROI is a performance measure used to evaluate the efficiency of an investment or to compare the efficiency of a number of different investments. To calculate ROI, the benefit (return) of an investment is divided by the cost of the investment; the result is expressed as a percentage or a ratio.
      Gain from Investment - Cost of Investment
ROI = -----------------------------------------
               Cost of Investment
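For example, an investment that costs $100,000 and returns $130,000 in benefits has an ROI of (130,000 - 100,000) / 100,000 = 0.30, or 30% (the figures are illustrative only).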
Traditionally, when IT professionals and top management discussed the ROI of an IT investment, they were mostly thinking of “financial” benefits. Today, business leaders and technologists also consider the “non-financial” benefits of IT investments.
Financial benefits include impacts on the organization's budget and finances, e.g., cost reductions or revenue increases.
Non-financial benefits include impacts on operations or mission performance and results, e.g., improved customer satisfaction, better information, shorter cycle times.
In reality, most organizations use one or more “financial metrics” which they refer to individually or collectively as “ROI”. These metrics include:
Payback Period: The amount of time required for the benefits to pay back the cost of the project.
Net Present Value (NPV): The value of future benefits restated in terms of today’s money.
Internal Rate of Return (IRR): The benefits restated as an interest rate.
Return on investment is a very popular metric because of its versatility and simplicity: if an investment does not have a positive ROI, or if there are other opportunities with a higher ROI, then the investment should not be undertaken. The calculation can be modified to suit the situation; it all depends on what is included as returns and costs. In the broadest sense, the term just attempts to measure the profitability of an investment, and as such there is no one "right" calculation.
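As a quick worked example with made-up numbers: if a project costs $100,000 and returns $125,000 in benefits over the evaluation period, then ROI = (125,000 - 100,000) / 100,000 = 0.25, or 25%. If those benefits accrue at $25,000 per year, the payback period is 100,000 / 25,000 = 4 years.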
PDCA Cycle:
The plan–do–check–act (PDCA) cycle (Figure 1) is a four-step model for carrying out change. Just as a circle has no end, the PDCA cycle should be repeated again and again for continuous improvement. It is also called the plan–do–study–act (PDSA) cycle, the Deming cycle, and the Shewhart cycle.
Figure 1: Plan-do-check-act cycle
PLAN
Establish the objectives and processes necessary to deliver results in accordance with the specifications.
DO
Implement the processes.
CHECK
Monitor and evaluate the processes and results against objectives and specifications, and report the outcome.
ACT
Apply actions to the outcome for necessary improvement. This means reviewing all steps (Plan, Do, Check, Act) and modifying the process to improve it before its next implementation.
The PDCA cycle can be used:
As a model for continuous improvement.
When starting a new improvement project.
When developing a new or improved design of a process, product or service.
When defining a repetitive work process.
When planning data collection and analysis in order to verify and prioritize problems or root causes.
When implementing any change.
RUNSQLSTM:
The RUNSQLSTM command is a CL command that reads and processes SQL statements stored in a source member. The statements in the source member can be run without compiling. This allows static SQL statements or dynamically generated SQL statements to be run without the need for embedding them in a high-level language such as RPG.
The RUNSQLSTM command can run a series of SQL statements, though it is limited to a subset of standard SQL. Within that subset, as many SQL statements can be embedded in a single source member as necessary to get the job done. The only real shortcoming of RUNSQLSTM is its lack of support for the SELECT statement; a workaround is sketched below.
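Since RUNSQLSTM cannot display a result set, one common workaround is to materialize the query into a work table and then examine that table with interactive SQL, Query, or DSPPFM. A minimal sketch, assuming your release's DB2 supports CREATE TABLE ... AS ... WITH DATA, and using a hypothetical work table named CREDITRPT:
-- Sketch: capture "SELECT" output in a work table,
-- since RUNSQLSTM itself cannot display it.
CREATE TABLE qtemp/creditrpt AS
  (SELECT custno, credit, slsreg
     FROM custmast
    WHERE credit > 100.00)
  WITH DATA;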
A typical source member, containing SQL statements for use by RUNSQLSTM would be as follows:
Source File: Mylib/mySrcFile(mySQLstuff)
0001 -- First do the update
0002 UPDATE custmast SET credit = 100.00 where credit = 0.00;
0003 CREATE VIEW custcredit AS select custno, credit, slsreg
0004 from custmast where credit > 100.00; /* Create Logical View */
RUNSQLSTM SRCFILE(MYLIB/MYSRCFILE) SRCMBR(MYSQLSTUFF) COMMIT(*NONE)
Line 1 is a comment. The -- indicates that everything after those two characters is a comment.
Line 2 is an SQL UPDATE statement. Note that SQL statements must end with a semicolon.
Line 3 is a CREATE VIEW statement. This creates an SQL view, known as a "logical file" on the AS/400. The statement continues onto the fourth line.
Line 4 is a continuation of line 3. Note that there is also a second style of comment on line 4. This is the CL style comment. Line 4 also includes the ending semicolon after the SQL statement.
Since RUNSQLSTM doesn't use CL style continuation, the semicolon is required to end all SQL statements.
There is an output listing when RUNSQLSTM runs the SQL statements. It is sent to QSYSPRT unless another print file is specified as the output file in the PRTFILE (Print File) parameter.
Customer Relationship Management:
Customer relationship management (CRM) is a broad term that covers concepts used by companies to manage their relationships with customers, including the capture, storage and analysis of customer information designed to reduce costs and increase profitability by strengthening customer loyalty.
There are three aspects of CRM which can each be implemented in isolation from each other:
Operational CRM - automation or support of customer processes that include a company’s sales or service representative
Collaborative CRM - direct communication with customers that does not include a company’s sales or service representative (“self service”)
Analytical CRM - analysis of customer data for a broad range of purposes
A typical CRM system is subdivided into three basic submodules:
Marketing
Sales
Service
The marketing submodule primarily provides functionality for long-term planning and short-term execution of marketing-related activities within an organization.
Sales functionality focuses on helping the sales team execute and manage the presales process in an organized manner. The sales team is responsible for regularly capturing key customer interactions, as well as any leads or opportunities they are working on, in the CRM system. The system helps by processing this data, monitoring it against targets, and proactively alerting the salesperson with recommended further actions based on the company's sales policy.
Service-related functionality focuses on effectively managing customer service (planned or unplanned), avoiding "leakage" of warranty-based services, avoiding penalties arising from nonconformity with service level agreements (SLAs), and providing first- and second-level support to customers.
Several commercial CRM software packages are available which vary in their approach to CRM. However, CRM is not just a technology, but rather a holistic approach to an organization's philosophy in dealing with its customers. This includes policies and processes, front-of-house customer service, employee training, marketing, systems and information management.
The objectives of a CRM strategy must consider a company’s specific situation and its customer’s needs and expectations.
System Development Life Cycle:
SDLC is the process of developing information systems through investigation, analysis, design, implementation, and maintenance. It is also known as the classic life cycle model, the linear sequential model, or the waterfall method. This model has the following activities.
1. System/Information Engineering and Modeling
As software is always part of a larger system (or business), work begins by establishing the requirements for all system elements and then allocating some subset of these requirements to software. The system is the basic and critical prerequisite for the existence of the software in any entity, so if the system is not in place, it must be engineered and put in place. Once the ideal system is engineered or tuned, the development team studies the software requirements for the system.
2. Software Requirement Analysis
This process is also known as the feasibility study. In this phase, the development team visits the customer and studies their system, investigating the need for possible software automation. By the end of the feasibility study, the team furnishes a document with specific recommendations for the candidate system, including personnel assignments, costs, a project schedule, and target dates. The requirement-gathering process is then intensified and focused specifically on software. The essential purpose of this phase is to find the need and to define the problem that must be solved.
3. System Analysis and Design
In this phase, the software's overall structure is defined. Analysis and design are crucial in the whole development cycle: any glitch in the design phase can be very expensive to fix at a later stage of development, so much care is taken during this phase. The logical system of the product is developed in this phase.
4. Code Generation
The design must be translated into a machine-readable form. The code generation step performs this task. If the design is performed in a detailed manner, code generation can be accomplished without much complication. With respect to the type of application, the right programming language is chosen.
5. Testing
Once the code is generated, software testing begins. Different testing tools and methodologies are available to uncover the bugs introduced during the previous phases, and some companies build their own testing tools, tailor-made for their own development operations.
6. Maintenance
The software will inevitably undergo change once it is delivered to the customer. There can be many reasons for this: change could happen because of unexpected input values into the system, and changes in the surrounding system could directly affect the software's operation. The software should be developed to accommodate changes that could happen during the post-implementation period.
Creating Help for AS/400 Commands:
A simpler method of creating help for AS/400 commands is to use the Generate Command Documentation (GENCMDDOC) command.
This command can be used to generate UIM (User Interface Manager) source that acts as a template for the command’s help.
The command is shown below.
GENCMDDOC CMD(MYLIB/MYCMD)
TODIR('/QSYS.LIB/MYLIB.LIB/QPNLSRC.FILE')
TOSTMF(*CMD) GENOPT(*UIM)
In the example, the command retrieves information from the command object named MYCMD in library MYLIB and generates UIM source into member MYCMD of source file QPNLSRC in library MYLIB.
After the template UIM source has been generated, it must be edited: each <...> marker in the UIM source needs to be replaced with the appropriate text, and the parameter descriptions can be refined. The list of messages can also be updated to contain the actual message identifiers signaled from the command, along with the message file that contains the message descriptions.
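For illustration, a minimally edited help module for the hypothetical MYCMD might look roughly like the following. This is a hand-written sketch, not GENCMDDOC's exact output, and the help text is invented:
:pnlgrp.
:help name=mycmd.
My Command - Help
:p.The My Command (MYCMD) command does such-and-such; replace this
paragraph with the real command-level help text.
:ehelp.
:epnlgrp.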
Once the UIM source has been tailored to the command, the help panel group can be created using the Create Panel Group (CRTPNLGRP) command:
CRTPNLGRP PNLGRP(MYLIB/MYCMD)
SRCFILE(MYLIB/QPNLSRC) SRCMBR(MYCMD)
This command attempts to create a panel group from the UIM source in member MYCMD of source physical file QPNLSRC in library MYLIB. If no severe errors are found when compiling the UIM source, a panel group (*PNLGRP) object named MYCMD is created in library MYLIB. The command generates a spooled file, which can be viewed to see the informational, warning, and severe errors found by the UIM compiler.
Once the panel group is created, it can be associated with the command when the command is created:
CRTCMD CMD(MYLIB/MYCMD) PGM(MYLIB/MYPGM) SRCFILE(MYLIB/QCMDSRC)
HLPPNLGRP(MYLIB/MYCMD) HLPID(*CMD)
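If the command object already exists, it does not have to be recreated; the help panel group can be attached afterward with the Change Command (CHGCMD) command, for example:
CHGCMD CMD(MYLIB/MYCMD) HLPPNLGRP(MYLIB/MYCMD) HLPID(*CMD)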
Yet Another Way to Build a CSV File:
SQL presents an easy way to create CSV files. Use the CHAR function to convert numeric fields to alpha format. SQL puts in the necessary minus signs and decimal points. Concatenate all the fields together to get one big comma-delimited output field.
The following Qshell pipeline is an example of the technique: the db2 utility retrieves the data, sed keeps only the lines that contain a comma (dropping db2's column headings and completion messages), and the output is appended to a file in the Integrated File System (IFS).
db2 "SELECT char(CUSNUM)','LSTNAM','INIT','
CITY','STATE','char(baldue) from qgpl.qcustcdt"
sed -n '/,/p' >> custdata.CSV
The CSV file looks like this:
938472 ,Henning ,G K,Dallas,TX,37.00
839283 ,Jones ,B D,Clay ,NY,500.00
392859 ,Vine ,S S,Broton,VT,439.00
938485 ,Johnson ,J A,Helen ,GA,3987.50
397267 ,Tyron ,W E,Hector,NY,.00
389572 ,Stevens ,K L,Denver,CO,58.75
846283 ,Alison ,J S,Isle ,MN,10.00
475938 ,Doe ,J W,Sutter,CA,250.00
693829 ,Thomas ,A N,Casper,WY,.00
593029 ,Williams,E D,Dallas,TX,25.00
192837 ,Lee ,F L,Hector,NY,489.50
583990 ,Abraham ,M T,Isle ,MN,500.00
It isn't necessary to run SQL under Qshell, but doing so sure makes it easy to build an IFS file.