Overview:
Client-server is a computing architecture that separates clients from servers. Each client or server connected to a network can also be referred to as a node. The most basic type of client-server architecture employs only two types of nodes: clients and servers. This type of architecture is sometimes referred to as two-tier.
Each instance of the client software can send data requests to one or more connected servers. In turn, the servers can accept these requests, process them, and return the requested information to the client. Although this concept can be applied for a variety of reasons to many different kinds of applications, the architecture remains fundamentally the same.
These days, clients are most often web browsers. Servers typically include web servers, database servers and mail servers. Online gaming is usually client-server too.
Characteristics of a client:
• Known as the sender of requests
• Initiates requests
• Waits for and receives replies
• Usually connects to a small number of servers at one time
• Typically interacts directly with end-users using a graphical user interface
Characteristics of a server:
• Receives requests sent by clients
• Upon receipt of requests, processes them and then serves replies
• Usually accepts connections from a large number of clients
• Typically does not interact directly with end-users
The following are examples of client/server architectures:
1) Two-tier architecture
In two-tier client/server architectures, the user interface is placed at the user's desktop environment and the database management system services reside on a server, usually a more powerful machine that provides services to many clients. Information processing is split between the user system interface environment and the database management server environment. The database management server provides support for stored procedures and triggers. Software vendors provide tools to simplify development of applications for the two-tier client/server architecture.
2) Multi-tiered architecture
Some designs are more sophisticated and consist of three different kinds of nodes: clients, application servers which process data for the clients and database servers which store data for the application servers. This configuration is called three-tier architecture, and is the most commonly used type of client-server architecture. Designs that contain more than two tiers are referred to as multi-tiered or n-tiered.
The advantage of n-tiered architectures is that they are far more scalable, since they balance and distribute the processing load among multiple, often redundant, specialized server nodes. This in turn improves overall system performance and reliability, since more of the processing load can be accommodated simultaneously.
The disadvantages of n-tiered architectures include a greater load on the network itself, due to increased network traffic, and more difficult programming and testing than in two-tier architectures, because more devices have to communicate in order to complete a client's request.
Advantages of client-server architecture:
In most cases, client-server architecture enables the roles and responsibilities of a computing system to be distributed among several independent computers that are known to each other only through a network. This creates an additional advantage to this architecture: greater ease of maintenance. For example, it is possible to replace, repair, upgrade, or even relocate a server while its clients remain both unaware and unaffected by that change. This independence from change is also referred to as encapsulation.
All the data is stored on the servers, which generally have far greater security controls than most clients. Servers can better control access and resources, to guarantee that only those clients with the appropriate permissions may access and change data.
Since data storage is centralized, updates to those data are far easier to administer.
The architecture functions with multiple clients of differing capabilities.
Disadvantages of client-server architecture:
Traffic congestion on the network has been an issue since the inception of the client-server paradigm. As the number of simultaneous client requests to a given server increases, the server can become severely overloaded.
The client-server paradigm lacks robustness: should a critical server fail, clients' requests cannot be fulfilled.
Wednesday, December 26, 2007
Translating from ASCII to EBCDIC:
Almost all network communications use the ASCII character set, but the AS/400 natively uses the EBCDIC character set. Clearly, once we're sending and receiving data over the network, we'll need to be able to translate between the two.
There are many different ways to translate between ASCII and EBCDIC. The API that we'll use to do this is called QDCXLATE, and you can find it in IBM's information center at the following link: http://publib.boulder.ibm.com/pubs/html/as400/v4r5/ic2924/info/apis/QDCXLATE.htm
There are other APIs that can be used to do these conversions. In particular, the iconv() set of APIs does a really good job; however, QDCXLATE is the easiest to use and will work just fine for our purposes.
The QDCXLATE API takes the following parameters:
Parm#   Description                 Usage   Data Type
  1     Length of data to convert   Input   Packed (5,0)
  2     Data to convert             I/O     Char (*)
  3     Conversion table            Input   Char (10)
And, since QDCXLATE is an OPM API, we actually call it as a program. Traditionally, you'd call an OPM API with the RPG 'CALL' statement, like this:
C                   CALL      'QDCXLATE'
C                   PARM      128           LENGTH            5 0
C                   PARM                    DATA            128
C                   PARM      'QTCPEBC'     TABLE            10
However, I find it easier to code program calls using prototypes, just as I use for procedure calls. So, when I call QDCXLATE, I will use the following syntax:
D Translate       PR                  ExtPgm('QDCXLATE')
D   Length                       5P 0 const
D   Data                     32766A   options(*varsize)
D   Table                       10A   const

C                   callp     Translate(128: Data: 'QTCPEBC')
There are certain advantages to using the prototyped call. The first, and most obvious, is that each time we want to call the program, we can do it in one line of code. The next is that the 'const' keyword allows the compiler to automatically convert expressions or numeric variables to the data type required by the call. Finally, the prototype allows the compiler to do more thorough syntax checking when calling the procedure.
There are two tables that we will use in our examples, QTCPASC and QTCPEBC. These tables are easy to remember if we just keep in mind that the table name specifies the character set that we want to translate the data into. In other words, 'QTCPEBC' is the IBM-supplied table for translating TCP data to EBCDIC (from ASCII), and 'QTCPASC' is the IBM-supplied table for translating TCP data to ASCII (from EBCDIC).
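To put this together, here is a minimal sketch of how the prototype might be used around a network receive. The buffer RcvBuf and its length are assumptions for illustration, not part of the API:

D RcvBuf          S             32A

 /free
     // RcvBuf is assumed to have been filled by a network receive (ASCII)
     Translate(%len(RcvBuf): RcvBuf: 'QTCPEBC');  // ASCII -> EBCDIC
     // ... work with the data in EBCDIC, then before sending it back:
     Translate(%len(RcvBuf): RcvBuf: 'QTCPASC');  // EBCDIC -> ASCII
 /end-free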
Tuesday, December 25, 2007
Basics of UML:
In the field of software engineering, the Unified Modeling Language (UML) is a standardized specification language for object modeling. UML is a general-purpose modeling language that includes a graphical notation used to create an abstract model of a system, referred to as a UML model.
UML diagrams represent three different views of a system model:
Functional requirements view
Emphasizes the functional requirements of the system from the user's point of view.
Includes use case diagrams.
Static structural view
Emphasizes the static structure of the system using objects, attributes, operations, and relationships.
Includes class diagrams and composite structure diagrams.
Dynamic behavior view
Emphasizes the dynamic behavior of the system by showing collaborations among objects and changes to the internal states of objects.
Includes sequence diagrams, activity diagrams and state machine diagrams.
There are 13 different types of diagrams in UML.
Structure diagrams emphasize what things must be in the system being modeled:
• Class diagram
• Component diagram
• Composite structure diagram
• Deployment diagram
• Object diagram
• Package diagram
Behavior diagrams emphasize what must happen in the system being modeled:
• Activity diagram
• State Machine diagram
• Use case diagram
Interaction diagrams, a subset of behavior diagrams, emphasize the flow of control and data among the things in the system being modeled:
• Communication diagram
• Interaction overview diagram
• Sequence diagram
• Timing diagram
UML is not restricted to modeling software. UML is also used for business process modeling, systems engineering modeling and representing organizational structures.
Delaying a job by less than a second:
How do we check if new records have been added to a physical file?
There are ways to wait for the latest record to be added, for instance, using end-of-file delay (OVRDBF with EOFDLY), but this is equivalent to delaying the job when no record is found and then trying to read again. It is also possible to couple a data queue to a file and send a message to the data queue every time a record is added to the file. This in turn will "wake up" the batch job and make it read the file.
The easiest way to poll a file would be to reposition to the start and read a record every time end of file is met. But this is not a good solution, as jobs continually polling a file in this way will take far too much CPU and slow the system down. A delay must be introduced. The simplest way is to add a Delay Job (DLYJOB) command every time an end-of-file condition is met. But DLYJOB is not perfect: you can delay a job only by a whole number of seconds, not a fraction of a second.
One second is fine in most cases, but sometimes, you can't afford to wait for one second and you can't afford not to wait. This is where a C function comes in handy. "pthread_delay_np" delays a thread for a number of nanoseconds!
This API is written in C and therefore expects its parameter in a specific format: a timespec structure holding seconds and nanoseconds.
D timeSpec        DS
D  seconds                      10I 0
D  nanoseconds                  10I 0
I declared the API as follows:
D delay           PR             5I 0 extProc('pthread_delay_np')
D                                 *   value
The API expects a pointer to the timespec definition. It also returns a non-zero value if a problem occurred (which is unlikely if you are passing a valid timespec).
Some specimen code:
C                   eval      seconds = 0
C                   eval      nanoseconds = 10000000
C                   eval      return = delay(%addr(timeSpec))
C                   if        return <> 0
C                   callp(e)  system('DLYJOB 1')
C                   endif
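For reference, 10,000,000 nanoseconds is one hundredth of a second, so the specimen above sleeps for 10 milliseconds between polls. The same logic in a free-form sketch, with rc as a hypothetical name for the return value and the timeSpec and delay declarations from above assumed:

D rc              S              5I 0

 /free
     seconds     = 0;
     nanoseconds = 10000000;       // 10 ms between polls
     rc = delay(%addr(timeSpec));  // pthread_delay_np
     if rc <> 0;                   // unlikely with a valid timespec
        // fall back to a one-second delay, e.g. DLYJOB 1
     endif;
 /end-free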
Sunday, December 23, 2007
Useful Functions in DDS:
Data description specifications (DDS) provide a powerful and convenient way to describe data attributes in file descriptions external to the application program that processes the data. People are always uncovering ways, however, to make DDS do more than you thought was possible.
1. Resizing a field:
Have you ever been in the position where you wanted to rename a field or change its size for programming purposes? For example, suppose you wanted to read a file that had packed fields sized 17,2 into a program that had a field size of 8,0. You can do this easily enough in your DDS file definition by building a logical file with the field a different size. The value will automatically be truncated and resized into the 8,0 field unless it is too large to fit, in which case an error results.
Physical File: TEST1
A          R GLRXX
A*
A            TSTN01        17P 2      TEXT('PERIOD 1 AMOUNT')
A            TSTN02        17P 2      TEXT('PERIOD 2 AMOUNT')
A            TSTN03        17P 2      TEXT('PERIOD 3 AMOUNT')
A            TSTN04        17P 2      TEXT('PERIOD 4 AMOUNT')
Logical File: TEST2
A          R GLRXX                    PFILE(TEST1)
A*
A            TSTN01         8P 0
A            TSTN02         8P 0
A            TSTN03         8P 0
A            TSTN04         8P 0
2. RENAME function:
If you want to rename a field for RPG or CLP programming purposes, just create the field in DDS with the new name and use the RENAME function on the old field. The field can be resized at the same time. Using the same physical file TEST1:
Logical File: TEST3
A          R GLRBX                    PFILE(TEST1)
A*
A            TSTN05         8P 0      RENAME(TSTN01)
A            TSTN06         8P 0      RENAME(TSTN02)
A            TSTN07         8P 0      RENAME(TSTN03)
A            TSTN08         8P 0      RENAME(TSTN04)
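A hypothetical use from RPG: a program can now read TEST3 and work with the renamed, resized fields directly (file and field names are those from the example above):

FTEST3     IF   E             DISK
 /free
     read TEST3;                  // record format GLRBX
     dow not %eof(TEST3);
        // TSTN05..TSTN08 hold the 8P 0 views of TSTN01..TSTN04
        read TEST3;
     enddo;
     *inlr = *on;
 /end-free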
3. Creating a key field for a joined logical using the SST function:
Another neat trick if you are building joined logical files is to use partial keys or a substringed field for the join. Example: say the secondary file has a field (CSTCTR) and you want to join it to your primary file, but the key field needed to make the join execute doesn't exist in the primary file; the key portion is embedded within a field in the primary file (CTRACC). Use the SST function on the field containing the key data and extract what will be needed for the join (XXCC). The XXCC field is then used in the join to the secondary file CTRXRFP. The "I" in the definition indicates that the field is used for input only.
A          R GLRXX                    JFILE(GLPCOM CTRXRFP)
A          J                          JOIN(1 2)
A                                     JFLD(XXCC CSTCTR)
A            WDCO                     RENAME(BXCO)
A            WDCOCT         I         SST(CTRACC 1 10)
A            WDEXP          I         SST(CTREXP 12 11)
A            WDACCT                   RENAME(CTRACC)
A            XXCC           I         SST(CTRACC 5 6)
4. Concatenating fields using the CONCAT function:
Another trick for building a field that doesn't exist in your logical file is to use the CONCAT function. Example: you want to create a field FSTLST (first and last name) from two fields, FIRST and LAST. This can be done as follows:
A            FIRST          R
A            LAST           R
A            FSTLST                   CONCAT(FIRST LAST)
5. Using the RANGE function:
In your logical file you may want to select a range of records rather than using the select function to pick individual records. Example: you want only the records in your logical file where the selected field is in the range 100 to 900. This can be done as follows:
A            S XXPG#                  RANGE('100' '900')
You can also use the RANGE function on multiple ranges.
6. Using the VALUES function:
In your logical file you may want to select specific records that have certain values by using the VALUES function. Example: You want only the records in your logical file where the selected field has the values 'O', 'P', and 'E'. This can be done as follows:
A            S RPTCTR                 VALUES('O ' 'P ' 'E ')
Thursday, December 20, 2007
Business Process Management:
Business Process Management (BPM) is an emerging field of knowledge and research at the intersection between management and information technology, encompassing methods, techniques and tools to design, enact, control, and analyze operational business processes involving humans, organizations, applications, documents and other sources of information.
BPM covers activities performed by organizations to manage and, if necessary, improve their business processes. BPM systems monitor the execution of business processes so that managers can analyze and change processes in response to data, rather than just a hunch. In short, Business Process Management is a management model that allows organizations to manage their processes like any other asset, improving them over time.
The activities which constitute business process management can be grouped into five categories: Design, Modeling, Execution, Monitoring, and Optimization.
Process Design encompasses the following:
1. (Optionally) the capture of existing processes, documenting their design in terms of Process Map / Flow, Actors, Alerts & Notifications, Escalations, Standard Operating Procedures, Service Level Agreements, and task hand-over mechanisms
2. The design of the "to-be" process, covering all of the above, ensuring that a correct and efficient design is prepared, at least in theory.
Process Modeling encompasses taking the process design and introducing different cost, resource, and other constraint scenarios to determine how the process will operate under different circumstances.
Process Execution is traditionally achieved by developing or purchasing an application that executes the required steps of the process.
Monitoring encompasses the tracking of individual processes so that information on their state can be easily seen and the provision of statistics on the performance of one or more processes.
Process Optimization includes retrieving process performance information from the modeling or monitoring phase, identifying actual or potential bottlenecks and opportunities for cost savings or other improvements, and then applying those enhancements to the design of the process, thus continuing the value cycle of business process management.
In a medium to large organization, a good business process management system allows the business to accommodate day-to-day changes in business processes, due to competitive, regulatory, or market challenges, without overly relying on IT departments.
Wednesday, December 19, 2007
Testing Techniques:
1) Black-box testing: Testing that verifies that the item being tested, when given the appropriate input, provides the expected results.
2) Boundary-value testing: Testing of unusual or extreme situations that an item should be able to handle.
3) Class testing: The act of ensuring that a class and its instances (objects) perform as defined.
4) Class-integration testing: The act of ensuring that the classes, and their instances, that form some software perform as defined.
5) Code review: A form of technical review in which the deliverable being reviewed is source code.
6) Component testing: The act of validating that a component works as defined.
7) Coverage testing: The act of ensuring that every line of code is exercised at least once.
8) Design review: A technical review in which a design model is inspected.
9) Inheritance-regression testing: The act of running the test cases of the super classes, both direct and indirect, on a given subclass.
10) Integration testing: Testing to verify several portions of software work together.
11) Method testing: Testing to verify a method (member function) performs as defined.
12) Model review: An inspection, ranging anywhere from a formal technical review to an informal walkthrough, by others who were not directly involved with the development of the model.
13) Path testing: The act of ensuring that all logic paths within your code are exercised at least once.
14) Prototype review: A process by which your users work through a collection of use cases, using a prototype as if it was the real system. The main goal is to test whether the design of the prototype meets their needs.
15) Prove it with code: The best way to determine whether a model actually reflects what is needed, or what should be built, is to actually build software based on that model that shows that the model works.
16) Regression testing: The act of ensuring that previously tested behaviors still work as expected after changes have been made to an application.
17) Stress testing: The act of ensuring that the system performs as expected under high volumes of transactions, users, load, and so on.
18) Usage scenario testing: A testing technique in which one or more person(s) validate a model by acting through the logic of usage scenarios.
19) User interface testing: The testing of the user interface (UI) to ensure that it follows accepted UI standards and meets the requirements defined for it. Often referred to as graphical user interface (GUI) testing.
20) White-box testing: Testing to verify that specific lines of code work as defined. Also referred to as clear-box testing.
Tuesday, December 18, 2007
Query for sound-alike results:
The Soundex function returns a 4-character code representing the sound of the words in the argument. The result can be compared with the sound of other strings.
The argument can be any string, but not a BLOB.
The data type of the result is CHAR(4). If the argument can be null, the result can be null; if the argument is null, the result is the null value.
The Soundex function is useful for finding strings for which the sound is known but the precise spelling is not. It makes assumptions about the way that letters and combinations of letters sound that can help to search out words with similar sounds.
The comparison can be done directly or by passing the strings as arguments to the Difference function.
Example:
Run the following query:
SELECT eename
FROM ckempeff
WHERE SOUNDEX (eename)=SOUNDEX ('Plips')
Query Result:
PHILLIPS, KRISTINA M
PLUVIOSE, NORIANIE
PHILLIPS, EDWARD D
PHILLIP, SHANNON L
PHELPS, PATRICIA E
PHILLIPS, KORI A
POLIVKA, BETTY M
PHELPS, AMY R
PHILLIPS III, CHARLES R
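The DIFFERENCE function mentioned above returns a value from 0 through 4 indicating how closely the SOUNDEX codes of its two arguments match, with 4 being the closest. A sketch against the same sample file (the eename column and ckempeff file are taken from the example above):

SELECT eename
  FROM ckempeff
 WHERE DIFFERENCE(eename, 'Plips') = 4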
Three Valued Indicator:
Now a variable can be off, on, or neither off nor on. A three-valued indicator? Here's how it's done.
Declare a two-byte character variable.
D SomeVar         S              2A   inz(*off)
For a two-byte character field, *ON is '11' and *OFF is '00'. SomeVar is therefore *ON if it holds two ones, *OFF if it holds two zeros, and neither *ON nor *OFF if it holds one one and one zero.
Now you can code expressions like these:
if SomeVar = *on;
   DoWhatever();
endif;

if SomeVar = *off;
   DoThis();
endif;

if SomeVar <> *on and SomeVar <> *off;
   DoSomething();
else;
   DoSomethingElse();
endif;

if SomeVar = *on or SomeVar = *off;
   DoSomething();
else;
   DoSomethingElse();
endif;
Don't those last two ifs look weird? Believe it or not, it's possible for the else branches to execute.
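To see it happen, put the variable into its third state explicitly. A minimal sketch (the value '10' is one of the two mixed combinations):

 /free
     SomeVar = '10';  // one one and one zero: neither *ON nor *OFF
     // SomeVar = *on is false and SomeVar = *off is also false,
     // so the else branches above will execute
 /end-free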
Structural and Functional Testing:
Structural testing is considered white-box testing because knowledge of the internal logic of the system is used to develop test cases. Structural testing includes path testing, code coverage testing and analysis, logic testing, nested loop testing, and similar techniques. Unit testing, string or integration testing, load testing, stress testing, and performance testing are considered structural.
Functional testing addresses the overall behavior of the program by testing transaction flows, input validation, and functional completeness. Functional testing is considered black-box testing because no knowledge of the internal logic of the system is used to develop test cases. System testing, regression testing, and user acceptance testing are types of functional testing.
Both methods together validate the entire system. For example, a functional test case might be taken from the documentation description of how to perform a certain function, such as accepting bar code input.
A structural test case might be taken from a technical documentation manual. To effectively test systems, both methods are needed. Each method has its pros and cons, which are listed below:
Structural Testing
Advantages
The logic of the software’s structure can be tested.
Parts of the software are tested that might have been forgotten if only functional testing were performed.
Disadvantages
Its tests do not ensure that user requirements have been met.
Its tests may not mimic real-world situations.
Functional Testing
Advantages
Simulates actual system usage.
Makes no system structure assumptions.
Disadvantages
Potential of missing logical errors in software.
Possibility of redundant testing.
Thursday, December 13, 2007
Quick Reference in TAATOOL:
DSPRPGHLP:
The Display RPG Help tool provides help text and samples for 1) RPG III operation codes and 2) RPG IV operation codes (both fixed and free form), Built-in functions, and H/F/D keywords. DSPRPGHLP provides a command interface to the help text which is normally accessed using SEU.
Related commands include STRRPGHLP and PRTRPGHLP.
Escape messages:
None. Escape messages from based-on functions will be re-sent.
Required Parameters:
Keyword (KWD):
The keyword to enter. The default is *ALL and may be used with any of the RPGTYPE entries. Op codes and BIFs are considered keywords.
A Built-in Function may be entered for RPGTYPE(*RPGLEBIF). Because the % sign is not valid when using the prompter with a value such as %ADDR, you must quote the value, such as '%ADDR'.
RPG type (RPGTYPE):
*ALLRPG is the default, but may only be used when KWD(*ALL) is specified.
If the entry is other than *ALLRPG, the KWD value entered must be found in the appropriate group. For example, KWD(ADD) may be entered for an RPGTYPE of *RPG or *RPGLEOP, but is not valid for *RPGLEF. If the keyword cannot be found, a special display appears and allows entry of a correct value.
*RPGLEOP should be entered for RPGLE operation codes.
*RPGLEBIF should be entered for RPGLE Built-in functions.
*RPGLEH should be entered for H Spec keywords.
*RPGLEF should be entered for F Spec keywords.
*RPGLED should be entered for D Spec keywords.
Or just press Enter after entering the DSPRPGHLP command.
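For instance, a hypothetical invocation built from the parameters described above, displaying help for the %ADDR built-in function:

DSPRPGHLP KWD('%ADDR') RPGTYPE(*RPGLEBIF)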
Software Metrics:
Software metrics can be classified into three categories:
1. Product metrics,
2. Process metrics, and
3. Project metrics.
Product metrics describe the characteristics of the product such as size, complexity, design features, performance, and quality level.
Process metrics can be used to improve software development and maintenance. Examples include the effectiveness of defect removal during development, the pattern of testing defect arrival, and the response time of the fix process.
Project metrics describe the project characteristics and execution. Examples include the number of software developers, the staffing pattern over the life cycle of the software, cost, schedule, and productivity.
Some metrics belong to multiple categories. For example, the in-process quality metrics of a project are both process metrics and project metrics.
Software quality metrics are a subset of software metrics that focus on the quality aspects of the product, process, and project. In general, software quality metrics are more closely associated with process and product metrics than with project metrics. Nonetheless, the project parameters such as the number of developers and their skill levels, the schedule, the size, and the organization structure certainly affect the quality of the product. Software quality metrics can be divided further into end-product quality metrics and in-process quality metrics. Examples include:
Product quality metrics
• Mean time to failure
• Defect density
• Customer-reported problems
• Customer satisfaction
In-process quality metrics
• Phase-based defect removal pattern
• Defect removal effectiveness
• Defect density during formal machine testing
• Defect arrival pattern during formal machine testing
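To make one of these concrete: defect density is usually expressed as defects per thousand lines of code (KLOC), so, as an illustrative example, 25 defects found in 10,000 lines of code gives a density of 2.5 defects per KLOC.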
When development of a software product is complete and it is released to the market, it enters the maintenance phase of its life cycle. During this phase the defect arrivals by time interval and customer problem calls (which may or may not be defects) by time interval are the de facto metrics.
Wednesday, December 12, 2007
Reorganize Physical File Member (RGZPFM):
Records are not physically removed from an iSeries table when using the DELETE opcode. Records are merely marked as deleted in the table, and the iSeries operating system knows not to allow them to be viewed. These deleted records stay in your tables, however, and can end up taking a lot of space if not managed. The way this is typically managed is with the IBM command RGZPFM. This command calls a program that looks for all the records in a specific table that have been marked for deletion and removes them. It then resets the relative record numbers (RRN) of all records in the file and rebuilds all the logical files.
If a keyed file is identified in the Key file (KEYFILE) parameter, the system reorganizes the member by changing the physical sequence of the records in storage to either match the keyed sequence of the physical file member's access path, or to match the access path of a logical file member that is defined over the physical file.
When the member is reorganized and KEYFILE(*NONE) is not specified, the sequence in which the records are actually stored is changed, and any deleted records are removed from the file. If KEYFILE(*NONE) is specified or defaulted, the sequence of the records does not change, but deleted records are removed from the member. Optionally, new sequence numbers and zero date fields are placed in the source fields of the records. These fields are changed after the member has been compressed or reorganized.
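To simply compress out the deleted records without resequencing, KEYFILE(*NONE), the default, is enough. A minimal sketch using the sample file from the example below:

RGZPFM FILE(DSTPRODLB/ORDHDRP) KEYFILE(*NONE)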
For example, the following Reorganize Physical File Member (RGZPFM) command reorganizes the first member of a physical file using an access path from a logical file:
RGZPFM FILE(DSTPRODLB/ORDHDRP)
KEYFILE(DSTPRODLB/ORDFILL ORDFILL)
The physical file ORDHDRP has an arrival sequence access path. It was reorganized using the access path in the logical file ORDFILL. Assume the key field is the Order field. The following illustrates how the records were arranged.
The following is an example of the original ORDHDRP file. Note that record 3 was deleted before the RGZPFM command was run:
Relative Record Number   Cust    Order   Ordate
1                        41394   41882   072480
2                        28674   32133   060280
3                        (deleted record)
4                        56325   38694   062780
The following example shows the ORDHDRP file reorganized using the Order field as the key field in ascending sequence:
Relative Record Number   Cust    Order   Ordate
1                        28674   32133   060280
2                        56325   38694   062780
3                        41394   41882   072480
Tuesday, December 11, 2007
ON vs. WHERE:
Here is some invoicing data that we can use for our discussion. We have header information:
SELECT H.* FROM INVHDR AS H
Invoice   Company   Customer   Date
47566     1         44         2004-05-03
47567     2         5          2004-05-03
47568     1         10001      2004-05-03
47569     7         777        2004-05-03
47570     7         777        2004-05-04
47571     2         5          2004-05-04
And we have related details:
SELECT D.* FROM INVDTL AS D
Invoice   Line   Item     Price   Quantity
47566     1      AB1441   25.00   3
47566     2      JJ9999   20.00   4
47567     1      DN0120     .35   800
47569     1      DC2984   12.50   2
47570     1      MI8830     .10   10
47570     2      AB1441   24.00   100
47571     1      AJ7644   15.00   1
Notice that the following query contains a selection expression in the WHERE clause:
SELECT H.INVOICE, H.COMPANY, H.CUSTNBR, H.INVDATE,
       D.LINE, D.ITEM, D.QTY
  FROM INVHDR AS H
  LEFT JOIN INVDTL AS D
    ON H.INVOICE = D.INVOICE
 WHERE H.COMPANY = 1
Invoice   Company   Customer   Date         Line   Item     Quantity
47566     1         44         2004-05-03   1      AB1441   3
47566     1         44         2004-05-03   2      JJ9999   4
47568     1         10001      2004-05-03   -      -        -
The result set includes data for company one invoices only. If we move the selection expression to the ON clause:
SELECT H.INVOICE, H.COMPANY, H.CUSTNBR, H.INVDATE,
       D.LINE, D.ITEM, D.QTY
  FROM INVHDR AS H
  LEFT JOIN INVDTL AS D
    ON H.INVOICE = D.INVOICE
   AND H.COMPANY = 1
Invoice   Company   Customer   Date         Line   Item     Quantity
47566     1         44         2004-05-03   1      AB1441   3
47566     1         44         2004-05-03   2      JJ9999   4
47567     2         5          2004-05-03   -      -        -
47568     1         10001      2004-05-03   -      -        -
47569     7         777        2004-05-03   -      -        -
47570     7         777        2004-05-04   -      -        -
47571     2         5          2004-05-04   -      -        -
This query differs from the previous one in that all invoice headers are in the resulting table, not just those for company number one. Notice that details are null for other companies, even though some of those invoices have corresponding rows in the details file. What’s going on?
Here's the difference. When a selection expression is placed in the WHERE clause, the joined result table is created first; then the filter is applied to select the rows that are to be returned in the result set. When a selection expression is placed in the ON clause of an outer join, it limits the rows that take part in the join, but for the primary table it does not limit the rows that are placed in the result set. ON restricts the rows that are allowed to participate in the join. In this case, all header rows are placed in the result set, but only company one header rows are allowed to join to the details.
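By contrast, with an inner join the two placements return the same rows; the distinction only matters for outer joins. A quick sketch using the same tables:

SELECT H.INVOICE, D.LINE, D.ITEM
  FROM INVHDR AS H
  JOIN INVDTL AS D
    ON H.INVOICE = D.INVOICE
   AND H.COMPANY = 1

This returns the same result as moving H.COMPANY = 1 into a WHERE clause, because an inner join discards the unmatched header rows either way.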
Monday, December 10, 2007
Pair Programming:
Pair programming is one of the most contentious practices of extreme programming (XP). The basic concept of pair programming, or "pairing," is two developers actively working together to build code. In XP, the rule is that you must produce all production code by pairing. The chief benefit touted by pairing proponents is improved code quality: two heads are better than one. Note that pairing is a practice you can use independently of XP. However, it may require a cultural change in traditional software shops; taking care to explain the benefits and giving some guidance will help:
General benefits:
• Produces better code coverage. By switching pairs, developers understand more of the system.
• Minimizes dependencies upon personnel.
• Results in a more evenly paced, sustainable development rhythm.
• Can produce solutions more rapidly.
• Moves all team members to a higher level of skills and system understanding.
• Helps build a true team.
Specific benefits from a management standpoint:
• Reduces risk
• Shorter learning curve for new hires
• Can be used as interviewing criteria ("can we work with this guy?")
• Problems are far less hidden
• Helps ensure adherence to standards
• Cross-pollination/resource fluidity.
Specific benefits from an employee perspective:
• Awareness of other parts of the system
• Resume building
• Decreases time spent in review meetings
• Continuous education. Learn new things every day from even the most junior programmers.
• Provides the ability to move between teams.
• More rapid learning as a new hire.
Rules:
1. All production code must be developed by a pair.
2. It’s not one person doing all the work and another watching.
3. Switch keyboards several times an hour. The person without the keyboard should be thinking about the bigger picture and should be providing strategic direction.
4. Don’t pair more than 75% of your work day. Make sure you take breaks! Get up and walk around for a few minutes at least once an hour.
5. Switch pairs frequently, at least once a day.
Writing Free-form SQL Statements:
If you have the SQL Development Kit, you may very well be happy to know that V5R4 allows you to place your SQL commands in free-format calcs.
Begin the statement with EXEC SQL. Be sure both words are on the same line. Code the statement in free-format syntax across as many lines as you like, and end with a semicolon.
/free
   exec sql
      update SomeFile
         set SomeField = :SomeValue
       where AnotherField = :AnotherValue;
/end-free
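Retrieval works the same way, with host variables prefixed by a colon. A sketch, where RowCount is a hypothetical host variable declared elsewhere in the program:

/free
   exec sql
      select count(*)
        into :RowCount
        from SomeFile
       where SomeField = :SomeValue;
/end-free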
Friday, December 7, 2007
Enabling a "workstation time-out" feature in RPG:
There are five things required to provide a time-out option on any interactive workstation file. This capability allows an RPG program to receive control before an end-user presses a Function key or Enter. Some of the uses for this kind of function include:
Providing a marquee for a schedule via a subfile
Updating the time displayed on the workstation at regular intervals
Refreshing the information, such as a news display, periodically
As mentioned, there are five things required to achieve workstation time-out. Those five things are:
1. Add the INVITE keyword to the Workstation display file DDS. This is a file-level keyword.
2. Use the WAITRCD parameter of CRTDSPF to set the desired time-out period (a CL sketch follows the sample code below).
3. Add the MAXDEV(*FILE) keyword to the File specification for the Workstation device file.
4. Write the desired display file formats to the display using the normal methods.
5. Use the READ operation code against the display file name, not the record format name.
You must avoid using EXFMT to the display file as this operation code does not support workstation time-out.
FMarquee   CF   E             WORKSTN MAXDEV(*FILE)
F                                     SFILE(detail:rrn)
C                   Write     Header
C                   Write     Footer
C                   Do        12            rrn
C                   Write     Detail
C                   enddo
C                   Write     SFLCTLFMT
C                   Read      Marquee
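Step 2 happens outside the source: the time-out interval comes from the display file object itself. A minimal CL sketch for creating the file, assuming a display file named MARQUEE in library MYLIB and a ten-second wait (the names and interval are placeholders):
CRTDSPF FILE(MYLIB/MARQUEE) SRCFILE(MYLIB/QDDSSRC) +
        SRCMBR(MARQUEE) WAITRCD(10)
For an existing file, CHGDSPF FILE(MYLIB/MARQUEE) WAITRCD(10) achieves the same thing. When the READ times out, the program receives control without the user pressing Enter or a Function key, so it can refresh the formats and issue the READ again.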
Quick Reference in TAATOOL:
DSPRPGHLP:
The Display RPG Help tool provides help text and samples for 1) RPG III operation codes and 2) RPG IV operation codes (both fixed and free form), Built-in functions, and H/F/D keywords. DSPRPGHLP provides a command interface to the help text which is normally accessed using SEU.
Other DSPRPGHLP commands include STRRPGHLP and PRTRPGHLP.
Escape messages:
None. Escape messages from the based-on functions will be re-sent.
Required Parameters:
Keyword (KWD):
The keyword to enter. The default is *ALL and may be used with any of the RPGTYPE entries. Op codes and BIFs are considered keywords.
A Built-in Function may be entered for RPGTYPE(*RPGLEBIF). Because the % sign is not valid when using the prompter with a value such as %ADDR, you must quote the value, such as '%ADDR'.
RPG type (RPGTYPE):
*ALLRPG is the default, but may only be used when KWD (*ALL) is specified.
If the entry is other than *ALLRPG, the KWD value entered must be found in the appropriate group. For example, KWD (ADD) may be entered for an RPGTYPE of *RPG or *RPGLEOP, but is not valid for *RPGLEF. If the keyword cannot be found, a special display appears and allows an entry of a correct value.
*RPGLEOP should be entered for RPGLE operation codes.
*RPGLEBIF should be entered for RPGLE Built-in functions.
*RPGLEH should be entered for H Spec keywords.
*RPGLEF should be entered for F Spec keywords.
*RPGLED should be entered for D Spec keywords.
Or just press enter after entering the command DSPRPGHLP.
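For example, assuming the TAA library is on your library list, requests for an op code and a built-in function might look like this (CHAIN and %ADDR are just sample keywords):
DSPRPGHLP KWD(CHAIN) RPGTYPE(*RPGLEOP)
DSPRPGHLP KWD('%ADDR') RPGTYPE(*RPGLEBIF)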
Tuesday, December 4, 2007
Gantt Charts:
A Gantt chart is a popular type of bar chart that illustrates a project schedule. Gantt charts illustrate the start and finish dates of the terminal elements and summary elements of a project. Terminal elements and summary elements comprise the work breakdown structure of the project. Some Gantt charts also show the dependency (i.e., precedence network) relationships between activities. Gantt charts can be used to show current schedule status using percent-complete shadings and a vertical "TODAY" line.
Gantt charts may be simple versions created on graph paper or more complex automated versions created using project management applications such as Microsoft Project or Excel.
Excel does not contain a built-in Gantt chart format; however, you can create a Gantt chart in Excel by customizing the stacked bar chart type.
Advantages:
• Gantt charts have become a common technique for representing the phases and activities of a project work breakdown structure (WBS), so they can be understood by a wide audience.
• A Gantt chart allows you to assess how long a project should take.
• A Gantt chart lays out the order in which tasks need to be carried out.
• A Gantt chart helps manage the dependencies between tasks.
• A Gantt chart allows you to see immediately what should have been achieved at a point in time.
• A Gantt chart allows you to see how remedial action may bring the project back on course.
Monday, December 3, 2007
TAATOOL Commands for copying Data Queues:
To copy the data from one data queue to another, there are TAATOOL commands available that help in achieving it. The following steps need to be followed.
1. CVTDTAQ command copies the data queue data to the specified file. The Convert Data Queue command converts the entries from a keyed or non-keyed TYPE (*STD) data queue to an outfile named DTAQP. One record is written for each entry. The size of the entry field in the outfile is limited to 9,000 bytes. Data is truncated if it exceeds this amount.
The model file is TAADTQMP with a format name of DTAQR.
Run the CVTDTAQ command by specifying the data queue name from which the data needs to be copied and the file name to which the data needs to be written.
2. CPYBCKDTAQ command copies the data from the file to the specified data queue. The Copy Back Data Queue command is intended for refreshing a data queue or duplicating the entries to a different data queue. You must first convert the entries in the data queue to the DTAQP file with the TAA CVTDTAQ command.
CPYBCKDTAQ then reads the data from the DTAQP file and uses the QSNDDTAQ API to send the entries to a named data queue. Both keyed and non-keyed data queues are supported.
Run the CPYBCKDTAQ command by specifying the file that contains the data and the data queue to which the data needs to be populated.
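Putting the two steps together, a minimal sketch of refreshing one queue from another; the library and queue names are placeholders, and the parameter keywords shown are assumptions, so prompt each command (F4) for the exact spelling:
CVTDTAQ    DTAQ(MYLIB/SRCDTAQ)   /* writes the entries to outfile DTAQP */
CPYBCKDTAQ DTAQ(MYLIB/TGTDTAQ)   /* reads DTAQP and resends the entries */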
Sunday, December 2, 2007
Project Development Stages:
The project development process has several major stages: initiation, development, production or execution, and closing/maintenance.
Initiation
The initiation stage determines the nature and scope of the development. If this stage is not performed well, it is unlikely that the project will be successful in meeting the business’s needs. The key project controls needed here are an understanding of the business environment and making sure that all necessary controls are incorporated into the project. Any deficiencies should be reported and a recommendation should be made to fix them.
The initiation stage should include a cohesive plan that encompasses the following areas:
• A study analyzing the business needs in terms of measurable goals.
• A review of the current operations.
• A conceptual design of the operation of the final product.
• Equipment requirements.
• A financial analysis of the costs and benefits, including a budget.
• Selection of stakeholders, including users and support personnel for the project.
• A project charter including costs, tasks, deliverables, and schedule.
Planning and design
After the initiation stage, the system is designed. Occasionally, a small prototype of the final product is built and tested. Testing is generally performed by a combination of testers and end users, and can occur after the prototype is built or concurrently. Controls should be in place to ensure that the final product will meet the specifications of the project charter. The results of the design stage should include a product design that:
• Satisfies the project sponsor, end user, and business requirements.
• Functions as it was intended.
• Can be produced within quality standards.
• Can be produced within time and budget constraints.
Closing and maintenance
Closing includes the formal acceptance of the project and the ending thereof. Administrative activities include the archiving of the files and documenting lessons learned.
Maintenance is an ongoing process, and it includes:
• Continuing support of end users
• Correction of errors
• Updates of the software over time
In this stage, auditors should pay attention to how effectively and quickly user problems are resolved.
Thursday, November 29, 2007
Smoke Testing:
Smoke testing is a term used in plumbing, woodwind repair, electronics, and computer software development. It refers to the first test made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail. After a smoke test proves that the pipes will not leak, the keys seal properly, the circuit will not burn, or the software will not crash outright, the assembly is ready for more stressful testing.
In computer programming and software testing, smoke testing is a preliminary to further testing, which should reveal simple failures severe enough to reject a prospective software release.
Smoke testing is done by developers before the build is released or by testers before accepting a build for further testing.
In software engineering, a smoke test generally consists of a collection of tests that can be applied to a newly created or repaired computer program. Sometimes the tests are performed by the automated system that builds the final software. In this sense, a smoke test is the process of validating code changes before the changes are checked into the larger product’s official source code collection. After code reviews, smoke testing is the most cost-effective method for identifying and fixing defects in software; some even believe that it is the most effective of all.
In software testing, a smoke test is a collection of written tests that are performed on a system prior to being accepted for further testing. This is also known as a build verification test. This is a "shallow and wide" approach to the application. The tester "touches" all areas of the application without getting too deep, looking for answers to basic questions like, "Can I launch the test item at all?", "Does it open to a window?", "Do the buttons on the window do things?". There is no need to get down to field validation or business flows. If you get a "No" answer to basic questions like these, then the application is so badly broken, there's effectively nothing there to allow further testing. These written tests can either be performed manually or using an automated tool. When automated tools are used, the tests are often initiated by the same process that generates the build itself.
This is sometimes referred to as 'rattle' testing - as in 'if I shake it does it rattle?'.
Wednesday, November 28, 2007
Add some colors to your source:
Here's a tip many people may not be aware of. When we code programs, we add a lot of comments to explain logic or functionality; instead of leaving these comments in the default green, we often prefer to show them in different colors. We can add colors to our source by using simple Client Access keyboard mapping.
• Click on the "MAP" button.
• Click on any key; for example, I want to map ALT + R for red.
• Click on R, type APL 28 in the box provided against Alt, and save the settings.
• Now go to the source line you want to make red. Put your cursor just before the first word of the line, press ALT + R, and press Enter; the source line will be displayed in red.
You can change the color of member text the same way. Take option 13 and press F4 for the member whose text you want to color, put your cursor just before the first letter of the text, press the corresponding color key, and press Enter. The member text will now be displayed in the chosen color.
For different colors, follow this table:
APL 20 - Green
APL 21 - Green RI
APL 22 - White
APL 23 - White RI
APL 24 - Green UL
APL 25 - Green RI UL
APL 26 - White UL
APL 27 - ND
APL 28 - Red
APL 29 - Red RI
APL 30 - Turquoise
APL 31 - Turquoise RI
APL 32 - Yellow
APL 33 - Yellow RI
APL 34 - Turquoise UL
APL 35 - Turquoise UL RI
APL 36 - Yellow UL
APL 38 - Pink
APL 39 - Pink RI
APL 3a - Blue
APL 3b - Blue RI
APL 3c - Pink UL
APL 3d - Pink UL RI
APL 3e - Blue UL
APL 2a - Red Blinking
APL 2b - Red RI Blinking
APL 2c - Red UL
APL 2d - Red UL RI
APL 2e - Red UL Blinking
Tuesday, November 27, 2007
Useful TAATOOL Commands:
Some useful TAATOOL commands:
DSPWINDOW:
The Display Window command displays a window over the current display. The intent of the command is to provide for a better informational display in exception conditions. Ten lines of text may be presented plus an error line. The command ends normally if the user presses Enter, F3, or F12. A single F key (such as F4 - F24) may be defined with user supplied text.
By default, the window will appear on the right side of the current display beginning in position 43 and extending to position 79. It starts on line 2 and ends on line 20.
Escape messages:
TAA9891 F key was pressed (based on FKEY parameter)
Restrictions
The display currently in use when DSPWINDOW is run must be specified as RSTDSP (*YES).
Example:
DSPWINDOW TITLE('Error Occurred') +
          LINE1('The record you want to delete') +
          LINE2(' no longer exists.') +
          LINE4('It has been deleted by another') +
          LINE5(' user.') +
          LINE7('Press Enter to continue to the') +
          LINE8(' next record.')
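When a function key is defined through the FKEY parameter, the caller can trap the resulting TAA9891 escape message with standard CL message monitoring. A minimal sketch, assuming FKEY has been supplied on the DSPWINDOW above (its exact value format is not shown in this post, so check the tool's help):
MONMSG MSGID(TAA9891) EXEC(RETURN) /* the defined F key was pressed */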
PMTOPR:
The PMTOPR command places a prompt on the user's display and allows validity checking of the response. This is useful in some environments where a specific input value is needed before proceeding into some further processing (e.g., some System Operator functions). The prompt is always made to the requester (the command must be run from an interactive program).
The advantages of this command are:
• PMTOPR can simplify the amount of code needed to validate the entry of a parameter. The coding required to properly use SNDUSRMSG instead (checking for a blank entry, performing simple validations, looping back on an error, and so on) is not simple.
• PMTOPR can provide a standard means of communicating with the operator when a single value is needed.
For example, the programmer could specify in a CL Program:
DCL &RTNVAR *CHAR LEN(16)
.
.
PMTOPR RTNVAR(&RTNVAR) +
LEN(10) +
PROMPT('Enter the file name to be processed by the BILLING function') +
TYPE(*NAME)
OVRDBF INPUT TOFILE(&RTNVAR)
CALL ....
The PMTOPR command would ensure that an entry was made, that it was a valid name (e.g., it did not start with a digit), and that it did not exceed 10 characters. If an invalid entry is made, an appropriate error message is displayed. The operator must enter a valid name (i.e., a command key or a blank value will be rejected). Note that the operator must respond; the default is that F3=Exit is not allowed. It would also be possible to supply a list of values that the operator can enter and/or a default. For example, the command could have been entered as:
PMTOPR RTNVAR(&RTNVAR) +
LEN(10) +
PROMPT('Enter the file name to be processed +
by the BILLING function') +
TYPE(*NAME) +
DFT(NEWORDER) +
VALUES(NEWORDER OLDORDER PREBILL BACKORDER)
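The validated entry comes back in &RTNVAR, so the calling program can branch on it with ordinary CL. A minimal sketch (the program names are placeholders):
IF COND(&RTNVAR *EQ 'PREBILL') THEN(CALL PGM(PREBILLPGM))
ELSE CMD(CALL PGM(ORDERPGM) PARM(&RTNVAR))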
Monday, November 26, 2007
Handy Keyboard Shortcuts:
Everyone knows that Alt+Ctrl+Del interrupts the operating system, but most people don't know many of the other handy key combinations.
• Alt+F4 closes the current window.
• Ctrl+Esc will pop up the Start menu,
• Alt+Esc will bring the next window to the foreground,
• Alt+Tab or Alt+Shift+Tab will let you cycle through all available windows and jump to the one you select.
On keyboards that have the little "Windows" key down near the space bar, you probably know that you can press that key to open the Start menu. You can also use that key with other keys like you use the shift key. For example:
• Windows Logo: Start menu
• Windows Logo+R: Run dialog box
• Windows Logo+M: Minimize all
• SHIFT+Windows Logo+M: Undo minimize all
• Windows Logo+F1: Help
• Windows Logo+E: Windows Explorer
• Windows Logo+F: Find files or folders
• Windows Logo+D: Minimizes all open windows and displays the desktop
• CTRL+Windows Logo+F: Find computer
• CTRL+Windows Logo+TAB: Moves focus from Start, to the Quick Launch toolbar, to the system tray (use RIGHT ARROW or LEFT ARROW to move focus to items on the Quick Launch toolbar and the system tray)
• Windows Logo+TAB: Cycle through taskbar buttons
• Windows Logo+Break: System Properties dialog box
• Application key: Displays a shortcut menu for the selected item
Sunday, November 25, 2007
Useful TAATOOL Commands:
Some TAATOOL CL commands that can be used in routine work:
Prerequisites
The following TAA Tools must be on your system:
CHKOBJ2:
The Check Object 2 command is similar to the system command CHKOBJ except that CHKOBJ2 sends an escape message if the object 'is found'. The intent of the command is to simplify coding when it is considered an error if the object exists. CHKOBJ2 sends message ID TAA9891 as an escape message if the object exists. If the object does not exist, no message is sent.
Parameters required:
Object (OBJ)
The qualified object name. The library value defaults to *LIBL. *CURLIB may also be specified.
Object type (OBJTYPE)
The object type to be checked. Any value that is valid on CHKOBJ may be used.
CHKOBJ2 escape messages you can monitor for:
TAA9891 Object exists.
Example: CHKOBJ2 OBJ(Name) OBJTYPE(*PGM).
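Because the failure arrives as an escape message, the typical use is CHKOBJ2 followed by a MONMSG. A minimal sketch (the object name and message text are placeholders):
CHKOBJ2 OBJ(MYLIB/NEWPGM) OBJTYPE(*PGM)
MONMSG MSGID(TAA9891) EXEC(DO)  /* object already exists */
   SNDPGMMSG MSG('NEWPGM already exists - create skipped.')
   RETURN
ENDDO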
WRKF2:
The Work File 2 command provides a subfile with options to display the attributes, relations, format, and data of a file. In addition, there are options to change, edit, clear, and delete a file.
Parameters required:
FILE: The qualified name of the file to be worked with. A generic name or *ALL may be entered. The library value defaults to *LIBL; *CURLIB, *ALL, or *ALLUSR may also be entered.
Restrictions
Only a single member file may be deleted.
Example:
WRKF2 FILE(Filename)
The Different Views of Quality by Quality Gurus
Industry-accepted definitions of quality are:
a. “Conformance to Requirements” stated by Philip Crosby
b. “Fit for Use” stated by Dr. Joseph Juran and Dr. W. Edwards Deming
These two definitions are not inconsistent.
The Two Quality Gaps
Most Information Technology (IT) groups have two quality gaps: the Producer gap and the Customer gap.
The producer gap is the difference between what is specified (the documented requirements and internal standards) versus what is delivered (what is actually built).
The customer gap is the difference between what the producers actually delivered versus what the customer wanted.
Quality Control and Quality Assurance
How can you tell a control practice from an assurance practice?
Quality means meeting requirements and meeting customer needs, which means a defect-free product from both the producer’s and the customer’s viewpoint. Both quality control and quality assurance are used to make quality happen. Of the two, quality assurance is the more important.
Quality Assurance (QA) is associated with a process. Once processes are consistent, they can "assure" that the same level of quality will be incorporated into each product produced by that process.
QC is an activity that verifies whether or not the product produced meets standards.
QA is an activity that establishes and evaluates the processes that produce the products.
If there is no process, there is no role for QA. Assurance would determine the need for, and acquire or help install system development methodology, estimation processes, system maintenance processes, and so forth.
Once installed, QA would measure them to find weaknesses in the process and then correct those weaknesses to continually improve the processes.
It is possible to have quality control without quality assurance.
The following statements help differentiate QC from QA:
QC relates to a specific product or service.
QC verifies whether particular attributes exist, or do not exist, in a specific product or service.
QC identifies defects for the primary purpose of correcting defects.
QC is the responsibility of the worker.
QA helps establish processes.
QA sets up measurement programs to evaluate processes.
QA identifies weaknesses in processes and improves them.
QA is a management responsibility, frequently performed by a staff function.
QA evaluates whether or not quality control is working for the primary purpose of determining whether or not there is a weakness in the process.
QA is concerned with all of the products that will ever be produced by a process.
QA is sometimes called quality control over quality control because it evaluates whether quality control is working.
QA personnel should never perform quality control, unless doing so is needed to validate that quality control is working.
Constraints
There are three types of constraints: key constraints, foreign key constraints, and check constraints.
· A key constraint is used to prevent duplicate information on a table. This corresponds to first normal form for a relational database (define the key). Key constraints are a prerequisite for foreign key constraints.
· A foreign key constraint (also referred to as referential integrity) defines a relationship between two tables: a dependent and a parent. A foreign key constraint ensures that rows may not be inserted in the dependent table if there isn't a corresponding row in the parent table. It also defines what should be done if a row in the parent table is changed or deleted.
· A check constraint defines the rules as to which values can be placed in a column.
There are commands available for dealing with constraints on AS/400 (ADDPFCST, CHGPFCST, DSPCPCST, EDTCPCST, RMVPFCST, WRKPFCST), or you can define them in SQL using the CREATE TABLE or ALTER TABLE commands.
But by far the easiest way of handling constraints is to use the Database function in iSeries Navigator.
You can define constraints by selecting Database > System > Schemas > Your Schema > Tables. (In case you are not yet familiar with SQL terminology, a schema is a library and a table is a physical file.) Right-click on a table name and select Definition; the resulting window contains a tab for each type of constraint. On each of these tabs, you have options for Add, Remove, and Definition. The Definition option simply shows you the definition of the constraint; in order to change the definition of a constraint, you must remove it and add it again.
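To illustrate the command route, here is a hedged CL sketch that adds a key constraint to a parent file and a foreign key constraint to a dependent file; the file, field, and constraint names are placeholders, and the parameter keywords are worth verifying with the command prompt (F4):
ADDPFCST FILE(MYLIB/CUSTOMER) TYPE(*PRIKEY) KEY(CUSTID) +
         CST(CUSTPK)
ADDPFCST FILE(MYLIB/ORDERS) TYPE(*REFCST) KEY(CUSTID) +
         PRNFILE(MYLIB/CUSTOMER) PRNKEY(CUSTID) +
         DLTRULE(*RESTRICT) UPDRULE(*RESTRICT) CST(ORDERSFK)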
Monday, November 19, 2007
Hidden Secrets of SBMJOB Command:
Most people use the OS/400 Submit Job (SBMJOB) command for batch processing. But SBMJOB has other powers that help to increase the capabilities of batch jobs.
SBMJOB CMD(CALL PGM(PROGRAM)) JOB(TESTJOB)
This way OS/400 submits the job for execution based on the configuration of the user running the command. The job runs under the user profile of the submitting user, it uses the job description assigned to the submitting user, it's submitted to the job queue associated with the assigned job description, and it uses the scheduling and output queue priorities assigned to its job description.
There are lots of times when you need to change the defaults and modify the operating parameters of a submitted job.
For instance, you can submit your job to run under another user profile by modifying the USER parameter of an SBMJOB statement in the following manner:
SBMJOB CMD(CALL PGM(PROGRAM)) JOB(TEST) USER(NEWUSER)
For this command to work, the submitting user must be authorized to the user profile assigned to the batch job. When submitted this way, the submitted job also uses the job description associated with the new user profile. The job queue, run priority, and output priority values then take their values from the new job description.
You can also submit the job to a job queue other than that associated with the job description.
SBMJOB CMD(CALL PGM(PROGRAM)) JOB(TEST)
JOBQ(QSYS/QSYSNOMAX) USER(NEWUSER)
Server jobs can be submitted to the QSYS/QSYSNOMAX job queue because it feeds into the QSYSWRK subsystem, which runs a lot of OS/400's server jobs, including many of its TCP/IP jobs. A second advantage to using QSYSNOMAX is that the QSYSWRK subsystem will accept and run an unlimited number of jobs originating from the QSYSNOMAX job queue (unlike the QBATCH subsystem, which typically runs only a few jobs at once). This means it's a great place to put any additional server jobs that you add to the system.
In addition to changing user profiles and job queues, you can set SBMJOB parameters to change the system library list for the job (the SYSLIBL parameter on the command), the current library for the job (the CURLIB parameter), and the job's initial library list (INLLIBL).
If you want to log all the CL commands that are executed in your batch job to the job's job log, set the Log CL program (LOGCLPGM) command parameter to *YES, like this:
SBMJOB CMD(CALL PGM(PROGRAM)) JOB(TESTJOB) LOGCLPGM(*YES)
If you want to submit the job so that it is held on the job queue, use the Hold on job queue (HOLD) parameter:
SBMJOB CMD(CALL PGM(PROGRAM)) JOB(TESTJOB) HOLD(*YES)
If you want to use SBMJOB to schedule a job to start at a certain date and time, use the Schedule date (SCDDATE) and Schedule time (SCDTIME) parameters.
SBMJOB CMD(CALL PGM(PROGRAM)) JOB(TESTJOB)
SCDDATE('11/01/03') SCDTIME('10:00:00')
These jobs are placed on the job queue in a scheduled (Scd) status, and they will not run until the appointed time. If an unscheduled job is submitted to the same job queue, it will run ahead of the scheduled jobs.
Another neat trick is that you can hide submitted jobs from the Work with Submitted Jobs (WRKSBMJOB) command. To do this, set the Allow Display by WRKSBMJOB (DSPSBMJOB) parameter to *NO, and submit your job in the following manner:
SBMJOB CMD(CALL PGM(PROGRAM)) JOB(TESTJOB) DSPSBMJOB(*NO)
If a user tries to view the progress of this job by using the WRKSBMJOB command, he won't be able to see it. Note, however, that users can still see the running job by finding it on the Work with Active Jobs (WRKACTJOB) command display or on the Work with Subsystem Jobs (WRKSBSJOB) command display.
If you don't want operators to answer predefined inquiry messages that appear during batch processing, you can set SBMJOB's Inquiry Message Reply (INQMSGRPY) parameter to tell the job how to answer messages. If you use the default, the job will use the inquiry message control value found in its corresponding job description. However, if you want your batch job to use default reply values for inquiry messages, you can submit the job with its INQMSGRPY value set to *DFT, like this:
SBMJOB CMD(CALL PGM(PROGRAM)) JOB(TESTJOB) INQMSGRPY(*DFT)
And the final SBMJOB trick is to change the message queue to which SBMJOB sends its job completion messages. You have three choices. By default, job messages are sent to the message queue that is specified in the user profile that the job runs under. If you want to do it manually, you change the Message Queue (MSGQ) parameter of the SBMJOB statement, as follows:
SBMJOB CMD(CALL PGM(PROGRAM)) JOB(TESTJOB) MSGQ(*USRPRF)
But if you want to change that message queue so that your messages go to the message queue of the workstation the job was submitted from, you set MSGQ to *WRKSTN and your SBMJOB statement would look like this:
SBMJOB CMD(CALL PGM(PROGRAM)) JOB(TESTJOB) MSGQ(*WRKSTN)
And if you want to suppress the completion message altogether, change MSGQ to *NONE and the job won't send out completion messages at all:
SBMJOB CMD(CALL PGM(PROGRAM)) JOB(TESTJOB) MSGQ(*NONE)
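These parameters can be combined freely on a single submission. As a closing sketch that pulls several of the tricks above together (all values are taken from the examples in this post):
SBMJOB CMD(CALL PGM(PROGRAM)) JOB(TESTJOB) USER(NEWUSER) +
       JOBQ(QSYS/QSYSNOMAX) LOGCLPGM(*YES) DSPSBMJOB(*NO) +
       INQMSGRPY(*DFT) MSGQ(*NONE)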
Sunday, November 18, 2007
Cost of Quality:
It’s a term that’s widely used – and widely misunderstood.
The “cost of quality” isn’t the price of creating a quality product or service. It’s the cost of NOT creating a quality product or service.
Quality costs are the total of the cost incurred by:
• Investing in the prevention of nonconformance to requirements.
• Appraising a product or service for conformance to requirements.
• Failing to meet requirements.
Prevention Cost:
The costs of all activities specifically designed to prevent poor quality in products or services.
Examples are the costs of:
• New product review
• Quality planning
• Supplier capability surveys
• Process capability evaluations
• Quality improvement team meetings
• Quality improvement projects
• Quality education and training
Appraisal Costs
The costs associated with measuring, evaluating or auditing products or services to assure conformance to quality standards and performance requirements.
These include the costs of:
• Incoming and source inspection/test of purchased material
• In-process and final inspection/test
• Product, process or service audits
• Calibration of measuring and test equipment
• Associated supplies and materials
Failure Costs
The costs resulting from products or services not conforming to requirements or customer/user needs. Failure costs are divided into internal and external failure categories.
Internal Failure Costs
Failure costs occurring prior to delivery or shipment of the product, or the furnishing of a service, to the customer.
Examples are the costs of:
• Scrap
• Rework
• Re-inspection
• Re-testing
• Material review
• Downgrading
External Failure Costs
Failure costs occurring after delivery or shipment of the product -- and during or after furnishing of a service -- to the customer.
Examples are the costs of:
• Processing customer complaints
• Customer returns
• Warranty claims
• Product recalls
Total Quality Costs:
The sum of the above costs. This represents the difference between the actual cost of a product or service and what the reduced cost would be if there were no possibility of substandard service, failure of products or defects in their manufacture.
Thursday, November 15, 2007
Start QINTER Remotely:
You must have the FTP server up and running on the iSeries.
From the start menu, follow these steps:
• Start
• run
• ftp
From FTP, follow these steps:
• open rmtsys
• user id
• password
• quote rcmd strsbs qinter
• quit
Or, if you have the host servers running on your iSeries system (STRHOSTSVR SERVER(*ALL)) and have Client Access installed on your workstation, you might try this approach:
• Start
• Run
• rmtcmd strsbs qinter //rmtsys
Where rmtsys is the IP address or name of your iSeries system.
Wednesday, November 14, 2007
Basic HTML Tags:
There are really only a few HTML tags (think DDS keywords) that are used over and over on 99 percent of all Web pages. Once we learn them, we can do pretty much anything with a web page. Using this knowledge, we can take existing DDS screens and convert them so that they'll run in a Web browser, using the CGI APIs.
Creating a Static HTML Page with Text Formatting:
A basic HTML page consists of an opening '<HTML>' tag and a closing '</HTML>' tag. All the relevant contents need to be placed between these tags.
Adding Text:
Type the following text into Notepad document (or any other text you like):
I am a great HTML programmer!
Click on the Notepad menu item named File, then select Save. When the Save window appears, name your file MYHTML.htm and click on the OK button. This will save your file as an HTML file. You could also have chosen to save it with an HTML extension rather than HTM; most browsers can handle either one.
To view your HTML document, use Windows Explorer to navigate to the folder you saved the MYHTML.htm file into, find this file, and then double click on it. The default Windows browser should open, displaying your new HTML document.
Formatting Text:
Add the following tags:
<center><b><u><i>I am a great HTML programmer!</i></u></b></center>
The <b> tags tell HTML to bold everything between the beginning (<b>) and ending (</b>) bold tags.
The <u> tags tell HTML to underline everything between the beginning (<u>) and ending (</u>) underline tags.
The <i> tags tell HTML to italicize everything between the beginning (<i>) and ending (</i>) italics tags.
The <center> tags tell HTML to center everything between the beginning (<center>) and ending (</center>) center tags.
Creating Table:
HTML tables, at their most basic, are very simple to create. An HTML table consists of an opening '<table>' tag and a closing '</table>' tag. Within the tags, you will create table rows by using the '<tr>' (table row) tags and cells by using the '<td>' tags (which stand for table data). To add a border to your table, so that it appears as if it's in a box, use the 'border' parameter on the '<table>' tag. The format is border="x", where x is a numeric value that specifies the border width. If you make the width "0" or leave the border keyword off, the table will not have a border. And, finally, to add column headings to your table, use the table heading ('<th>') tags. For example, a table with headings Item#, Quantity, and Price might look like this:
<table border="1">
<tr><th>Item#</th><th>Quantity</th><th>Price</th></tr>
<tr><td>A56778</td><td>17.34</td><td>...</td></tr>
<tr><td>B65657</td><td>19.87</td><td>...</td></tr>
</table>
Adding an Image:
To tell a browser where to find a non-text item, such as an image, you have to include a special tag (<img>) that includes a parameter (src=) that tells the browser where the non-text item can be found. In other words, the <img> tag tells the browser the HTTP address, or URL, of the image file. For example (the file name is illustrative):
<img src="mypicture.gif">
Adding Hyperlinks:
Links allow you to provide a way for your users to jump from one Web page to another. Links, like all other HTML functions, have their own special tag. That tag is <a>, which stands for "anchor." Every anchor tag contains a parameter that tells the browser where to take the user when he clicks on that link. The parameter "href=" points to the physical location (that is, the HTTP address, or URL) of the HTML page. For example (the URL is illustrative):
<a href="http://www.example.com/archive.htm">The archival of AS/400 Team's Concept for the Day!</a>
Tuesday, November 13, 2007
Alpha and Beta Testing:
Typically, software goes through two stages of testing before it is considered finished. The first stage, called alpha testing, is often performed only by users within the organization developing the software. The second stage, called beta testing, generally involves a limited number of external users.
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.
Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to maximize feedback from the largest possible pool of future users.
Monday, November 12, 2007
Interesting V6R1 CL Enhancement:
Here are a few interesting enhancements for CL in the V6R1 release.
Close a File:
CL includes a new Close Database File (CLOSE) command that we can use to close a file. This gives another way to process a file more than once in a CL procedure. The first Receive File (RCVF) command issued against a file opens that file and retrieves the first record. Each subsequent RCVF command retrieves another record, and when there are no more records to retrieve, RCVF issues escape message CPF0864. If we CLOSE a file, RCVF re-opens the file and begins the process anew. CLOSE has only one parameter, OPNID, which indicates the file to be closed.
Example:
DCLF FILE(*LIBL/MYFILE2) OPNID(FILE2)
:
RCVF OPNID(FILE2)
:
CLOSE OPNID(FILE2)
RCVF OPNID(FILE2)
:
Copy Source Code from Other Members:
The new Include CL Source (INCLUDE) command is similar in function to RPG's /COPY and /INCLUDE directives, COBOL's COPY statement, and C's #INCLUDE preprocessor directive. To tell the compiler which source member to include, fill in the SRCMBR and SRCFILE parameters.
The following command copies in member SUBR1, which is in source physical file COMMONSUBR in library MYLIB:
INCLUDE SRCMBR(SUBR1) SRCFILE(MYLIB/COMMONSUBR)
Store Compiler Options:
In CL procedures, we can use the Declare Processing Options (DCLPRCOPT) command to specify compiler options.
DCLPRCOPT DFTACTGRP(*NO) ACTGRP(MYAPP) +
BNDDIR(MYAPPLIB/MYBNDDIR)
Sunday, November 11, 2007
Grey Box Testing:
Grey Box testing is a technique that uses a combination of Black box and White box testing. Grey Box testing is not Black box testing, because the tester does know some of the internal workings of the software under test.
In Grey box testing, the tester applies a limited number of test cases to the internal workings of the software under test. For the remaining part of the Grey box testing, one takes a black-box approach, applying inputs to the software under test and observing the outputs.
The typical grey box tester is permitted to set up his testing environment, for instance by seeding a database, and can view the state of the product after his actions, for instance by running an SQL query against the database to verify the values of columns. It is used almost exclusively by client-server testers or others who use a database as a repository of information.
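As a sketch of that database-backed style of testing (the library, table, and column names are hypothetical):
-- Grey box setup: seed a known row before driving the application
INSERT INTO MYLIB.CUSTOMER (ID, NAME, BALANCE) VALUES (1001, 'TEST USER', 0)
-- (exercise the application through its normal inputs, black-box style)
-- Grey box verification: inspect the resulting state directly
SELECT BALANCE FROM MYLIB.CUSTOMER WHERE ID = 1001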
Tuesday, November 6, 2007
CL-Like Error Handling in RPG:
Most CL programs handle errors significantly better than RPG programs. CL handles errors better because it's extremely easy to trap and handle errors in CL through MONMSG. In RPG, the program and file information data structures provide error information, and the *PSSR subroutine and %STATUS built-in function can be used to trap and handle that information, but that isn't enough.
In V5R1, IBM has given RPG the capability to handle errors much the way CL does. Three new operation codes were added to RPG in V5R1 to perform kicked-up error handling: MONITOR, ON-ERROR, and ENDMON.
The MONITOR op code is used to begin error monitoring. Once the MONITOR operation is entered, the program monitors all C-specifications between it and the ENDMON operation. When a program or file exception is encountered on any statement within the monitor block, control is passed to the appropriate ON-ERROR operation, and the logic within that ON-ERROR section is performed. If all the statements within the monitor block complete successfully, control is then passed to the statement following the ENDMON operation.
The ON-ERROR operation acts much like the WHEN operation. Each ON-ERROR statement is followed by one or more statements that will execute when that specific ON-ERROR block is triggered. The block is ended when another ON-ERROR or the ENDMON statement is reached.
Any C-specifications coded after the ENDMON operation will not be trapped.
For each monitor block, at least one ON-ERROR operation must be present. The ON-ERROR operations must be coded in the same routine as the MONITOR and ENDMON operations. For example, we cannot code a MONITOR and ENDMON in the mainline and place the ON-ERROR statements in a subroutine that is called within the monitor block.
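A minimal fixed-format sketch of a monitor block (MYFILE, the fields, and the FILERR/LOGERR subroutines are hypothetical and assumed to be defined elsewhere):
C                   MONITOR
C                   EVAL      Result = Num1 / Num2
C     CustNo        CHAIN     MYFILE
C                   ON-ERROR  00102
C* Status 00102 is "divide by zero"
C                   EVAL      Result = 0
C                   ON-ERROR  *FILE
C* Any file error, such as the CHAIN above failing
C                   EXSR      FILERR
C                   ON-ERROR  *ALL
C* Catch-all for any other program error
C                   EXSR      LOGERR
C                   ENDMON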
Monday, November 5, 2007
Experimental Software Engineering:
Experimental software engineering is a sub-domain of software engineering focusing on experiments on software systems (software products, processes, and resources). It is interested in devising experiments on software, in collecting data from these experiments, and in devising laws and theories from this data. Proponents of experimental software engineering advocate that experimentation is an important method in contributing to accumulation of knowledge in software engineering.
Empirical software engineering is a related concept, sometimes used synonymously with experimental software engineering. Empirical software engineering is a field of research that emphasizes the use of empirical studies of all kinds to accumulate knowledge. Methods used include experiments, a variety of case studies, surveys, and statistical analyses.
The scientific method suggests a cycle of observations, laws, and theories to advance science. Experimental software engineering applies this method to software.
Sunday, November 4, 2007
Five cool things you can do with OpsNav:
V5R1 of Client Access Express Operations Navigator (OpsNav) allows you to perform a great many functions. A few of the most important ones are discussed below.
1. Copy Spool file to your Desktop
Double-click on the Basic Operations tree, in the left-hand pane of the OpsNav GUI. Next, click on Printer Output. If the user profile with which you are logged on to OpsNav has any associated spool files on the iSeries, they will appear in the right-hand pane of the OpsNav GUI.
To copy the iSeries spool file to your PC's desktop, you can do one of two things:
• You can right-click on a spool file in the right-hand pane of the OpsNav GUI and select COPY from the pop-up menu. Next, right-click on your PC's desktop and select Paste from the pop-up menu. The spool file will be copied to your desktop.
• Your other option is to click once on the spool file in the right-hand pane of the OpsNav GUI to select a file. While holding the left mouse button down, drag the spool file to your PC's desktop. The spool file will be copied to your desktop.
2. Create an OpsNav Shortcut to frequently used item
You can create a shortcut to any OpsNav item and store it on your PC's desktop very easily. Then, each time you want to use a function, such as changing the properties of the Telnet server, all you'll need to do is click on the shortcut itself rather than drilling down through the OpsNav tree.
To create a shortcut, click on any OpsNav item, such as the TCP/IP tree item found under Network/Servers/, and drag that item to your PC's desktop. When you let go of the left mouse button, you will be prompted to create a shortcut. Select this menu item and a shortcut to the TCP/IP servers will now exist on your desktop.
3. Edit Privileges for FTP users
To edit FTP privileges for individual users
• In OpsNav, expand the Users and Groups tree item.
• Click on the All Users tree item.
• Right-click on a user profile whose FTP privileges you wish to edit, and select Properties from the pop- up menu.
• On the Properties panel for that user, click on the Capabilities button.
• Click on the Applications tab.
• Click on the down arrow next to Access for.
• Select Host Applications.
• Expand the AS/400 TCP/IP Utilities tree item.
• Expand File Transfer Protocol (FTP).
• Expand FTP Server.
• Expand Specific Operations.
Place a check mark next to those FTP actions you want this user to be able to perform, and uncheck the ones you don't want the user to be able to perform.
4. Generate SQL from a Logical File
• Expand the Database tree item in OpsNav.
• Click on Database Navigator.
• In the taskpad, at the bottom of the OpsNav GUI, double-click the Map your Database wizard. (If you don't see the taskpad at the bottom of your OpsNav GUI, click on the OpsNav menu item View and select the TaskPad option.)
• In the Database Navigator Map wizard, you'll see a list of iSeries libraries. Expand this list until you find the library containing the physical or logical file you want to generate the SQL from.
• Expand Indexes and then Views, under the Tables tree item.
• Select the Index (logical file) you want to generate the SQL from.
• Right-click on that logical file and select Generate SQL from the pop-up menu
You can now view the SQL that will create that logical file, or even modify it and run it if you wish.
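For a keyed logical file, the generated SQL typically resembles the following (names hypothetical):
CREATE INDEX MYLIB.CUSTIDX ON MYLIB.CUSTOMER (CUSTNO)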
5. Configure iSeries security using the Security Wizard
• Expand the Security tree item in the OpsNav GUI.
• Double-click the Configure the Security of this Server wizard item in the taskpad.
The AS/400 Security wizard will appear. You can now step through a series of easy and understandable questions about how you use your iSeries. When you're finished, you will be presented with a set of recommendations you can use to best protect your system. Even better, if you so desire, you can immediately apply these changes to your system.
Thursday, November 1, 2007
SQL SET OPTION Statement:
SQL's SET OPTION statement is a powerful way to control the parameters of the DB2 execution environment in which an SQL program runs.
SET OPTION is a statement that is evaluated at "compile time." It never actually gets executed. Therefore SET OPTION can only be specified once in a program. In embedded SQL, it should be the first SQL statement in the program. (For ILE RPG programmers, this is similar to specifying compile time options in the H spec.) For SQL routines (triggers, functions and stored procedures) SET OPTION is actually implemented as a clause in the various CREATE statements.
A few of the important things that can be controlled by SET OPTION are:
• CLOSQLCSR (Close SQL Cursor)
• COMMIT
• DATFMT (Date Format)
• DBGVIEW (Debug View)
• DFTRDBCOL (Default Relational Database Collection)
• DLYPRP (Delay Prepare)
• DYNUSRPRF (Dynamic User Profile)
• USRPRF (User Profile)
Here is an example of how they are specified in an RPG embedded SQL program:
C/Exec SQL
C+ Set Option Commit=*NONE, DatFmt=*ISO, CloSqlCsr=*ENDMOD, DlyPrp=*YES
C/End-Exec
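For SQL routines, the same options appear as a clause of the CREATE statement instead. A sketch (the procedure, library, and table names are hypothetical):
CREATE PROCEDURE MYLIB.PURGECUST (IN P_ID INT)
LANGUAGE SQL
SET OPTION COMMIT = *NONE, DATFMT = *ISO
BEGIN
DELETE FROM MYLIB.CUSTOMER WHERE ID = P_ID;
END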
Wednesday, October 31, 2007
Documentation:
Software documentation or source code documentation is written text that accompanies computer software. It explains either how the software operates or how to use it, and it may mean different things to people in different roles.
Documentation is an important part of software engineering. Types of documentation include:
• Architecture/Design - Overview of software. Includes relations to the environment and construction principles to be used in the design of software components.
• Technical - Documentation of code, algorithms, interfaces, and APIs.
• End User - Manuals for the end-user, system administrators and support staff.
• Marketing - Product briefs and promotional collateral.
Architecture/Design Documentation:
Architecture documentation is a special breed of design document. In a way, architecture documents are third derivative from the code (design documents being second derivative, and code documents being first). These documents do not describe how to program a particular routine, or even why that particular routine exists in the form that it does; instead they merely lay out the general requirements that would motivate the existence of such a routine.
Technical Documentation:
This is what most programmers mean when using the term software documentation. When creating software, code alone is insufficient. There must be some text along with it to describe various aspects of its intended operation. It is important for the code documents to be thorough, but not so verbose that it becomes difficult to maintain them.
End User Documentation:
Typically, the user documentation describes each feature of the program, and assists the user in realizing these features. A good user document can also go so far as to provide thorough troubleshooting assistance. It is very important for user documents to not be confusing, and for them to be up to date.
Marketing Documentation:
For many applications it is necessary to have some promotional materials to encourage casual observers to spend more time learning about the product. One good marketing technique is to provide clear and memorable catch phrases that exemplify the point we wish to convey, and also emphasize the interoperability of the program with anything else provided by the manufacturer.
Tuesday, October 30, 2007
Updating System i5 for Daylight Saving time changes:
In the Energy Policy Act of 2005, Congress changed Daylight Saving Time (DST) in the United States so that starting this year (2007) DST begins three weeks earlier than it did last year, on the second Sunday of March. Conversely, 2007 will also see Daylight Saving Time end one week later than last year, on the first Sunday in November.
There are two ways to handle this change. You can do a manual adjustment or you can apply IBM's new Daylight Saving Time PTFs for i5 systems running i5/OS V5R3 and V5R4.
• For i5/OS V5R4, order PTFs SI26040 and SI25990
• For i5/OS V5R3, order PTFs SI26039 and SI25991
The PTFs automate the changes needed to support the new DST rules. After application, all of the time zone rules on the system will be updated to the new DST starting and ending dates.
After applying the PTF upgrade, you can check to see if the fix has been applied by performing the following steps on all your upgraded partitions:
1. Double-check the Time Zone description that your partition is using by referencing the QTIMZON system value. On a 5250 green screen, you can check this value by using the Display System Value (DSPSYSVAL) command:
DSPSYSVAL SYSVAL(QTIMZON)
2. Look up the Daylight Saving Time start and end date for your time zone value by using the Work with Time Zone Descriptions command (WRKTIMZON). Run this command to display all Time Zone descriptions on your system, like this:
WRKTIMZON TIMZON(*ALL)
The Work with Time Zone Descriptions screen will display all the different time zones that are defined on your system.
To find your QTIMZON time zone on the WRKTIMZON screen, page down the list until you see either the time zone listed in QTIMZON or a time zone with a greater than sign (>) in front of it (the default time zone is always designated in this list by a '>' sign). Type 5=Display details in front of your default time zone entry and press Enter. On the screens that follow, you will see the starting and ending DST dates for your time zone description. What you are interested in is whether the PTFs changed your time zone DST start and end dates correctly.
If your time zone rule could not be changed by the PTF upgrade (as explained above), then you should change it manually by entering the following Change Time Zone Description command (CHGTIMZON) and pressing the F4 key.
CHGTIMZON TIMZON(TIME_ZONE_DESCRIPTION_NAME)
Locate the DST start and end dates on the second CHGTIMZON screen (which can be accessed by pressing the F24=More key on the first screen, followed by the F9=All key). If your time zone DST dates were not changed, you can change them manually from this screen.
Monday, October 29, 2007
Prevent someone else from peeking at your Windows 2000/XP system:
If you have a Windows 2000 or Windows XP machine connected to the Internet, chances are good that your computer's security information, including user profiles, account policies, and share names, is freely available to any hacker on the Internet.
This is because, by default, Windows 2000 and Windows XP do not restrict anonymous access to the above listed information.
However, you can very easily prevent others from gaining access to this sensitive information on your PC by making one very simple change to the Windows Registry.
Here's how:
1. Click on the Windows Start button.
2. Click Run.
3. Enter "Regedit" in the Run Box and click OK.
4. The Windows Registry Editor will open.
5. Drill down through the Windows Registry to the following key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\LSA
6. Locate the value named restrictanonymous.
7. Double-click on this value to edit it.
8. Set the value to 1.
9. Reboot your PC.
This setting will prevent enumeration (listing) of the Security Accounts Manager (SAM) settings on your PC. If that level of security is not strong enough for you, you can also set this key value to 2, which means that no one can access any account information on your PC without explicit anonymous permission.
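If you prefer to script the change, the same edit can be made on Windows XP with the built-in reg command (a sketch; run it from an administrator account, then reboot):
reg add HKLM\SYSTEM\CurrentControlSet\Control\LSA /v restrictanonymous /t REG_DWORD /d 1 /f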
Work Breakdown Structure:
A Work Breakdown Structure (WBS) is a fundamental project management technique for defining and organizing the total scope of a project, using a hierarchical tree structure. A well-designed WBS describes planned outcomes instead of planned actions. Outcomes are the desired ends of the project, and can be predicted accurately; actions comprise the project plan and may be difficult to predict accurately.
One of the most important WBS design principles is called the 100% Rule. The 100% Rule...states that the WBS includes 100% of the work defined by the project scope and captures all deliverables – internal, external, and interim – in terms of the work to be completed, including project management. The rule applies at all levels within the hierarchy: the sum of the work at the “child” level must equal 100% of the work represented by the “parent” and the WBS should not include any work that falls outside the actual scope of the project, that is, it cannot include more than 100% of the work… It is important to remember that the 100% rule also applies to the activity level. The work represented by the activities in each work package must add up to 100% of the work necessary to complete the work package.
Figure 1 shows a WBS construction technique that demonstrates the 100% Rule quantitatively. At the beginning of the design process, the project manager has assigned 100 points to the total scope of this project, which is designing and building a custom bicycle. At WBS Level 2, the 100 total points are subdivided into seven comprehensive elements. The number of points allocated to each is a judgment based on the relative effort involved; it is NOT an estimate of duration. The three largest elements of WBS Level 2 are further subdivided at Level 3, and so forth. The largest terminal elements at Level 3 represent only 17% of the total scope of work. These larger elements may be further subdivided using the progressive elaboration technique described above.
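Since the figure itself is not reproduced here, a hypothetical Level 2 outline in the same spirit (the elements and point values are illustrative):
1.0 Custom bicycle (100 points)
1.1 Frame set (17)
1.2 Wheel sets (17)
1.3 Drive train (17)
1.4 Braking system (12)
1.5 Seating system (12)
1.6 Assembly and test (15)
1.7 Project management (10)
The seven child elements sum to exactly 100 points, and nothing outside the project's scope appears in the tree.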
A WBS is not a project plan or a project schedule and it is not a chronological listing. It is considered poor practice to construct a project schedule (e.g. using project management software) before designing a proper WBS.
Thursday, October 25, 2007
ASKQST in iSeries:
The Ask Questions (ASKQST) command shows the Search for Answers display. From this display you can search for an answer to a question. You must first search the database to determine if an answer exists before a question can be asked.
The ASKQST command prompts for two values:
Q/A database (QSTDB) - Specifies the Question and Answer database in which to ask a question. Generally we are not authorized to a specific database; in that case, the value to provide is *SELECT. If *SELECT is specified on the QSTDB parameter, any Q & A database in the library to which you are authorized can be selected.
Lib containing Q/A database - Generally the value used is *QSTLIB.
To try this command on our AS/400 system, from the command prompt type ASKQST and press F4, then:
1) Specify Q/A database -> *SELECT
2) Specify Lib containing Q/A database -> *QSTLIB
3) Press Enter.
4) On the 'Search for Answers' screen, specify the database name -> HAWKQST (all questions and answers related to the Hawkeye tool have been posted under this database).
5) Specify Primary topic as *ALL (the default value), or press F4 and make a selection of your choice.
6) Press Enter.
7) The Total found field on the screen shows the total number of questions and answers posted in database HAWKQST.
8) Press Enter again to reach the 'Display of Answers to Questions' screen, where you can view the posted questions and their corresponding answers. Take option 5 against a question to view its answer in detail.
Explore whatever ASKQST databases are available on our system, and use them to retrieve answers whenever you have queries.
Note: More information is also available in the Basic System Operation information in the iSeries information Center at http://www.ibm.com/eserver/iseries/infocenter.