Wednesday, April 30, 2008

Call of the Prototype:

A lot of RPG programmers are under the misconception that the CALLP operation means Call Procedure. That is because most of them come across it when they start using subprocedures. But CALLP means Call a Prototyped Procedure or Program, and it can be used in place of the CALL operation, as well as CALLB.
What's wrong with CALL and PARM?
In RPG all parameters are passed by reference. That means a pointer to the parameter is passed, not the actual value of the parameter. That, in turn, means both the passed and receiving parameter fields share the same memory location, and that is where the potential problem lies.
Figure 1 shows the code of a calling program. It calls PGMB passing Parm1, a 10 character field, as a parameter.
D TestParm DS
D Parm1 10 Inz('XXXXXXXXXX')
D Parm2 10 Inz('YYYYYYYYYY')

C Call 'PGMB'
C Parm Parm1
Figure 1: A program call with a parameter field in a data structure
Figure 2 shows the code of the called program. It has an incorrect length of 15 for Parm1.
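Figure 2 itself is not reproduced here; a sketch of what the called program might look like, consistent with the description that follows (the program defines the parameter too long and fills it with 'Z's), is:

```
 * PGMB: the called program. Parm1 is (incorrectly) defined as
 * 15 characters, although the caller passed a 10 character field.
C     *Entry        Plist
C                   Parm                    Parm1            15
 * Fill all 15 bytes with 'Z' - this overwrites the caller's
 * Parm1 and the first 5 bytes of Parm2.
C                   Eval      Parm1 = *All'Z'
C                   Return
```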
Will the compiler tell us that it is invalid? No.
Will the program fail at run time? No.
What will happen? When control returns to the calling program, Parm1 will have a value of 'ZZZZZZZZZZ' and Parm2 will have a value of 'ZZZZZYYYYY'.
When the compiler sees a CALLP operation, it validates that all the parameters are correct. But how does it know what correct means? You provide a prototype. A prototype is the format, or the template, or the rules, for the call operation. It is not a parameter list.
Prototypes are defined on the D specifications, as shown in Figure 3. The format of a prototype is very similar to a data structure, except that the type is PR as opposed to DS. You can provide your own name for the CALLP (PromptProduct). The EXTPGM keyword indicates that this is the equivalent of a CALL operation, and it identifies the name of the called program (PRP001R). The names of the subfields in the prototype are irrelevant; what matters is the number of subfields (i.e. parameters) and the definition of each. In the example in Figure 3 the compiler will ensure that two parameters are passed, that Parm1 is a 30 character field and that Parm2 is a 1 character field.
D PromptProduct PR ExtPgm('PRP001R')
D FirstParm 30
D SecondParm 1

C CallP PromptProduct(Parm1:Parm2)

Other prototype features, such as CONST, OPTIONS(*OMIT), and OPTIONS(*NOPASS), ease the coding process.
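For example, a prototype using these keywords might look like this (a sketch, not from the original figures):

```
D PromptProduct   PR                  ExtPgm('PRP001R')
D  FirstParm                    30    Const
D  SecondParm                     1   Options(*NoPass)
```

CONST allows a literal or expression to be passed for the first parameter, and OPTIONS(*NOPASS) makes the second parameter optional; the called program can test %PARMS to see how many parameters were actually passed. OPTIONS(*OMIT) would allow *OMIT to be passed in place of a value.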

Monday, April 28, 2008

IP Address:

An IP address (or Internet Protocol address) is a unique address that certain electronic devices use in order to identify and communicate with each other on a computer network utilizing the Internet Protocol standard (IP)—in simpler terms, a computer address. Any participating network device—including routers, switches, computers, infrastructure servers (e.g., NTP, DNS, DHCP, SNMP, etc.), printers, Internet fax machines, and some telephones—can have its own address that is unique within the scope of the specific network. Some IP addresses are intended to be unique within the scope of the global Internet, while others need to be unique only within the scope of an enterprise.
IP addresses are managed and created by the Internet Assigned Numbers Authority (IANA). The IANA generally allocates super-blocks to Regional Internet Registries, who in turn allocate smaller blocks to Internet service providers and enterprises.
The Internet Protocol (IP) has two versions currently in use (IPv4 and IPv6). Each version has its own definition of an IP address. Because of its prevalence, "IP address" typically refers to those defined by IPv4. IPv4 only uses 32-bit (4-byte) addresses, which limits the address space to 4,294,967,296 (2^32) possible unique addresses. IPv6 is a new standard protocol intended to replace IPv4 for the Internet. Addresses are 128 bits (16 bytes) wide, which, even with a generous assignment of net blocks, will more than suffice for the foreseeable future. In theory, there would be exactly 2^128, or about 3.403 × 10^38, unique host interface addresses.
When a computer uses the same IP address every time it connects to the network, it is known as a Static IP address. In contrast, in situations when the computer's IP address changes frequently (such as when a user logs on to a network through dialup or through shared residential cable) it is called a Dynamic IP address.
IP addresses can appear to be shared by multiple client devices either because they are part of a shared hosting web server environment or because an IPv4 network address translator (NAT) or proxy server acts as an intermediary agent on behalf of its customers, in which case the real originating IP addresses might be hidden from the server receiving a request.

Selective Prompting for CL Commands:

You can request to prompt for selected parameters within a command. This is especially helpful when you are using some of the longer commands and do not want to be prompted for certain parameters.

Selective Prompting Character Description
?? The parameter is displayed and input-capable.
?* The parameter is displayed but is not input-capable. Any user-specified value is passed to the command processing program.
?< The parameter is displayed and is input-capable, but the command default is sent to the CPP unless the value displayed on the parameter is changed.
?/ Reserved for IBM use.
?- The parameter is not displayed. The specified value (or default) is passed to the CPP. Not allowed in prompt override programs.
?& The parameter is not displayed until F9=All parameters is pressed. Once displayed, it is input-capable. The command default is sent to the CPP unless the value displayed on the parameter is changed.
?% The parameter is not displayed until F9=All parameters is pressed. Once displayed, it is not input-capable. The command default is sent to the CPP.


Example

OVRDBF ?*FILE(FILEA) ??TOFILE(&FILENAME) ??MBR(MBR1)

In the example above, the FROMFILE name is displayed but cannot be changed, while both the TOFILE and MBR values may be changed.

Thursday, April 24, 2008

Failure Mode and Effects Analysis:

A Failure mode and effects analysis (FMEA) is a procedure for analyzing potential failure modes within a system, classifying them by severity, and determining each failure's effect upon the system. It is widely used in the manufacturing industries in various phases of the product life cycle. Failure causes are any errors or defects in process, design, or item, especially ones that affect the customer, and can be potential or actual. Effects analysis refers to studying the consequences of those failures.

Types of FMEA:
• Process: analysis of manufacturing and assembly processes
• Design: analysis of products prior to production
• Concept: analysis of systems or subsystems in the early design concept stages
• Equipment: analysis of machinery and equipment design before purchase
• Service: analysis of service industry processes before they are released to impact the customer
• System: analysis of the global system functions
• Software: analysis of the software functions

Implementation:
In FMEA, Failures are prioritized according to how serious their consequences are, how frequently they occur and how easily they can be detected. A FMEA also documents current knowledge and actions about the risks of failures, for use in continuous improvement. FMEA is used during the design stage with an aim to avoid future failures. Later it is used for process control, before and during ongoing operation of the process. Ideally, FMEA begins during the earliest conceptual stages of design and continues throughout the life of the product or service.
The purpose of the FMEA is to take actions to eliminate or reduce failures, starting with the highest-priority ones. It may be used to evaluate risk management priorities for mitigating known threat-vulnerabilities. FMEA helps select remedial actions that reduce cumulative impacts of life-cycle consequences (risks) from a systems failure (fault).
Advantages:
• Improve the quality, reliability and safety of a product/process
• Improve company image and competitiveness
• Increase user satisfaction
• Reduce system development timing and cost
• Collect information to reduce future failures, capture engineering knowledge
• Reduce the potential for warranty concerns
• Early identification and elimination of potential failure modes
• Emphasize problem prevention
• Minimize late changes and associated cost
• Catalyst for teamwork and idea exchange between functions

Wednesday, April 23, 2008

*PSSR:

In RPG, there are Program Exception/Errors (divide by zero, array index error, etc.)
and File Exception Errors (any file error). As with MONMSG in CL, you can trap errors
at the program level or the command level.

Program Exceptions:
If a program encounters a program error, it checks to see if there is a *PSSR subroutine coded in the program. If there is, it executes the subroutine. If there isn't, the program "fails" with an error message.
A *PSSR subroutine may be executed like any other subroutine (using EXSR or CASxx). The ENDSR for a *PSSR subroutine can contain a return-point instruction.
The following return-point operands can be specified on the ENDSR operation for a
*PSSR subroutine, but most of them apply only if you are using the RPG cycle.
• *DETL: continue at the beginning of detail output lines
• *GETIN: continue at the get-input-record routine
• *TOTC: continue at the beginning of total calculations
• *TOTL: continue at the beginning of total output lines
• *OFL: continue at the beginning of overflow lines
• *DETC: continue at the beginning of detail calculations
• *CANCL: cancel the execution of the program
• Blank: return control to the RPG default error handler. If the subroutine was called by the EXSR operation and factor 2 is blank, control returns to the next sequential instruction.
When *PSSR is invoked by an error, it does not return to the statement in error. It
should be used as an exit from the program.
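A minimal *PSSR sketch (the DUMP is optional):

```
C     *PSSR         Begsr
 * Produce a formatted dump for problem determination,
 * then cancel the program.
C                   Dump
C                   Endsr     '*CANCL'
```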
File Exceptions:
Just as the program has an error handling subroutine in *PSSR, each file that you define on an F spec can also have its own error handling subroutine, identified by the INFSR keyword. Each file can have its own subroutine or a subroutine can be shared between different files.
These subroutines act in exactly the same way as *PSSR, so why not use the *PSSR
subroutine? Below is an example of INFSR being defined on the F specs.
FDisplay CF E WorkStn InfSR(*PSSR)

FCustomer UF E K Disk InfSR(*PSSR)
Now, if there is an I/O error with a file, the *PSSR subroutine will be executed.
Well, not quite. The INFSR subroutine will trap all I/O errors except file open errors.
The RPG cycle opens the files automatically before your code runs, so *PSSR cannot be executed if there is a problem during file open (e.g., a level check).
To trap file open errors, you need to take explicit control of the open: specify a conditional open for the file (USROPN keyword on the F spec) and use the OPEN operation to open the file (usually coded in the *INZSR subroutine).
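For example (a sketch), with USROPN the open is under your control, so an open failure can be monitored:

```
FCustomer  UF   E           K Disk    UsrOpn InfSR(*PSSR)

C     *INZSR        Begsr
 * Open the file ourselves; the E extender traps an open
 * error (e.g., a level check) instead of halting the program.
C                   Open(E)   Customer
C                   If        %Error
 * Handle the open failure here
C                   Endif
C                   Endsr
```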

Error Status Codes:

There are various ways to trap errors in an RPG program: operation code extenders, MONITOR groups, the INFDS, and *PSSR. But once the program has trapped an error, which error is it? The %STATUS BIF provides a five-digit status code that tells you what the error is. Program status codes are in the range 00100 to 00999 and file status codes are in the range 01000 to 01999. Status codes in the range 00000 to 00050 are considered normal (i.e. they are not set by an error condition).
Status codes correspond to RPG run-time messages in message file QRNXMSG (e.g., message RNQ0100 = status code 00100). You can view the messages using the command:
DSPMSGD RANGE(*FIRST *LAST) MSGF(QRNXMSG) DETAIL(*BASIC)
The table below lists some of the more commonly used status codes.
00100 Value out of range for string operation
00102 Divide by zero
00112 Invalid Date, Time or Timestamp value.
00121 Array index not valid
00122 OCCUR outside of range
00202 Called program or procedure failed
00211 Error calling program or procedure
00222 Pointer or parameter error
00401 Data area specified on IN/OUT not found
00413 Error on IN/OUT operation
00414 User not authorized to use data area
00415 User not authorized to change data area
00907 Decimal data error (digit or sign not valid)
01021 Tried to write a record that already exists (file being used has unique keys and key is duplicate, or attempted to write duplicate relative record number to a subfile).
01022 Referential constraint error detected on file member.
01023 Error in trigger program before file operation performed.
01024 Error in trigger program after file operation performed.
01211 File not open.
01218 Record already locked.
01221 Update operation attempted without a prior read.
01222 Record cannot be allocated due to referential constraint error
01331 Wait time exceeded for READ from WORKSTN file.
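These codes can be tested with the MONITOR operation and the %STATUS built-in function; a free-form sketch (CustNo, Customer, and Msg are illustrative names):

```
 /Free
  Monitor;
    Chain CustNo Customer;
  On-Error 1218;
    // Record already locked - retry or inform the user
    Msg = 'Record is locked by another job';
  On-Error;
    // Any other file or program error
    Msg = 'Unexpected status: ' + %Char(%Status(Customer));
  EndMon;
 /End-Free
```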

Monday, April 21, 2008

Kaizen:

Kaizen is Japanese for improvement. It is a Japanese philosophy that focuses on continuous improvement throughout all aspects of life. When applied to the workplace, Kaizen activities continually improve all functions of a business from manufacturing to management and from the CEO to the assembly line workers. By improving the standardized activities and processes, Kaizen aims to eliminate waste. Kaizen was first implemented in several Japanese businesses during the country's recovery after World War II, including Toyota, and has since spread to businesses throughout the world.
Kaizen is a daily activity whose purpose goes beyond simple productivity improvement. It is also a process that, when done correctly, humanizes the workplace, eliminates overly hard work (both mental and physical), and teaches people how to perform experiments on their work using the scientific method and how to learn to spot and eliminate waste in business processes.
To be most effective kaizen must operate with three principles in place:
• Consider the process and the results (not results-only) so that actions to achieve effects are surfaced;
• Systemic thinking of the whole process and not just that immediately in view (i.e. big picture, not solely the narrow view) in order to avoid creating problems elsewhere in the process; and
• A learning, non-judgmental, non-blaming (because blaming is wasteful) approach and intent will allow the re-examination of the assumptions that resulted in the current process.
People at all levels of an organization can participate in kaizen, from the CEO down, as well as external stakeholders when applicable. The format for kaizen can be individual, suggestion system, small group, or large group. At Toyota, it is usually a local improvement within a workstation or local area and involves a small group in improving their own work environment and productivity. This group is often guided through the kaizen process by a line supervisor; sometimes this is the line supervisor's key role.
While kaizen (at Toyota) usually delivers small improvements, the culture of continual aligned small improvements and standardization yields large results in the form of compound productivity improvement. Hence the English usage of "kaizen" can be: "continuous improvement" or "continual improvement."

Retrieving the width of a printer file:

A quick way to retrieve the page width of a printer file is to write a small CL program and pass it the file name and library of your printer file. In that CL program, execute the following command against your printer file, read the resulting outfile, and then return the values that you want to your calling program.
DSPFD FILE(QSYSPRT) TYPE(*ATR) OUTPUT(*OUTFILE) FILEATR(*PRTF) OUTFILE(QTEMP/QAFDPRT)
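A sketch of such a CL program follows. The &ATPAGW field name is an assumption; check record format QWHFDPRT in QTEMP/QAFDPRT for the exact name of the page-width field. Note that the outfile must already exist when the program is compiled (run the DSPFD once beforehand).

```
PGM        PARM(&FILE &LIB &WIDTH)
DCL        VAR(&FILE)  TYPE(*CHAR) LEN(10)
DCL        VAR(&LIB)   TYPE(*CHAR) LEN(10)
DCL        VAR(&WIDTH) TYPE(*DEC)  LEN(5 0)
DCLF       FILE(QTEMP/QAFDPRT)

DSPFD      FILE(&LIB/&FILE) TYPE(*ATR) OUTPUT(*OUTFILE) +
             FILEATR(*PRTF) OUTFILE(QTEMP/QAFDPRT)
RCVF
CHGVAR     VAR(&WIDTH) VALUE(&ATPAGW) /* Hypothetical field name */
ENDPGM
```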

How SQL Works:

When you execute an SQL command, the system determines the best way to carry out your request. That is, you concentrate on the task that needs to be done, and the system figures out how to do your task. Various software components are involved in this process, and for this discussion, you need to know about three of them.

First is the Query Dispatcher, whose job it is to decide which of the two query optimization engines it will call on to optimize and process a query. The second and third software components are the two query engines--the Classic Query Engine (CQE) and the SQL Query Engine (SQE). SQE is newer and better than CQE, but there are certain tasks that it can't carry out.

You can reference four types of files in SQL statements: DDS-defined physical files, DDS-defined logical files, SQL tables, and SQL views. SQE can't handle DDS-defined logical files. SQL views and indexes are also implemented as logical files, but they are not applicable to this discussion.

CQE handles all non-SQL queries, such as the Open Query File (OPNQRYF) command and Query/400. CQE also handles distributed queries via DB2 Multisystem.

If you wish to query a logical file from an SQL statement, consider querying the underlying physical file(s) instead. If the logical file has select/omit criteria, put the criteria in the WHERE clause. Another approach would be to create a view over the physical file and reference that view in your SQL query.
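For example (a sketch, assuming a logical file whose select/omit criteria select records with STATUS = 'A' over physical file CUSTPF in MYLIB):

```sql
-- Query the physical file directly, moving the
-- select/omit criteria into the WHERE clause:
SELECT * FROM mylib.custpf WHERE status = 'A';

-- Or create a view once and query it instead,
-- allowing SQE to process the query:
CREATE VIEW mylib.custactv AS
    SELECT * FROM mylib.custpf WHERE status = 'A';
SELECT * FROM mylib.custactv;
```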

Wednesday, April 16, 2008

Date manipulation in CL:

Using the code below you can do date manipulation in a CL program, as well as retrieve the system date.


DCL VAR(&SYSDATE)   TYPE(*CHAR) LEN(6)
DCL VAR(&YESTERDAY) TYPE(*DEC)  LEN(8 0)
DCL VAR(&LILIAN)    TYPE(*CHAR) LEN(4)
DCL VAR(&JUNK1)     TYPE(*CHAR) LEN(8)
DCL VAR(&JUNK2)     TYPE(*CHAR) LEN(23)
DCL VAR(&WDATE)     TYPE(*CHAR) LEN(8)

RTVSYSVAL SYSVAL(QDATE) RTNVAL(&SYSDATE)

/* Get local time from system: when this call is complete, +
   &LILIAN will contain the number of days between today +
   and Oct 14, 1582. */
CALLPRC PRC(CEELOCT) PARM(&LILIAN &JUNK1 &JUNK2 *OMIT)

/* Subtracting 1 from &LILIAN produces yesterday's date */
CHGVAR VAR(%BIN(&LILIAN)) VALUE(%BIN(&LILIAN) - 1)

/* Convert the Lilian date to YYYYMMDD format */
CALLPRC PRC(CEEDATE) PARM(&LILIAN 'YYYYMMDD' &WDATE *OMIT)

CHGVAR VAR(&YESTERDAY) VALUE(&WDATE)
Note: CEELOCT and CEEDATE are APIs that exist on the system; you do not need to create them. In essence, the CL supplied here does the following:

1) Uses the CEELOCT API to get the current date in Lilian format.
2) Adds or subtracts the required number of days from the Lilian date.
3) Uses CEEDATE to convert the new Lilian date back to the date format you wish; in this case, YYYYMMDD.

Copy using DDM:

Copying RPG or CL program source from a remote iSeries to the iSeries you are currently on can be achieved in several ways. One way is by using Client Access data transfer. Another way is by using the SNDNETF command. But both take time.
A faster way is to use two iSeries commands: CRTDDMF and CPYF.
The CRTDDMF command sets up a DDM file (a reference file) on the local system which can be used to access a file located on a remote system. A DDM file must contain the name of the remote file, the information identifying the remote system, and the method used to access the record(s) in the remote file. Below are examples of setting up DDM files:
CRTDDMF Example 1:
CRTDDMF FILE(myLib/myDDMF) RMTFILE(rmtLib/rmtRPGSRC) RMTLOCNAME(rmtAS/400)
The above example shows setting up a DDM file by way of specifying the remote location name and uses the default remote address type of *SNA. It assumes that the DDM server at the remote location supports SNA connectivity.
Another way of setting up a DDM file is by specifying the remote IP address (nnn.nnn.nnn.nnn) for the remote location parameter. This form requires an address type of *IP and assumes that the DDM server at the remote location supports the use of TCP/IP.
CRTDDMF Example 2:
CRTDDMF FILE(myLib/myDDMF) RMTFILE(rmtLib/rmtRPGSRC) RMTLOCNAME('123.456.789.123' *IP)
When the DDM file has been created, use the CPYF command to copy any or all members from a remote source physical file referenced by the DDM file.
CPYF Example:
CPYF FROMFILE(myLib/myDDMF) TOFILE(myLib/myRPGSRC) FROMMBR(rmtMember) TOMBR(myMember) MBROPT(*ADD) FMTOPT(*NOCHK)

Monday, April 14, 2008

Inflation:

Inflation is a rise in the general level of prices over time. It may also refer to a rise in the prices of a specific set of goods or services. In either case, it is measured as the percentage rate of change of a price index.
Mainstream economists believe that high rates of inflation are caused by high rates of growth of the money supply. Views on the factors that determine moderate rates of inflation are more varied: changes in inflation are sometimes attributed to fluctuations in real demand for goods and services or in available supplies (i.e. changes in scarcity), and sometimes to changes in the supply or demand for money.
There are many measures of inflation. For example, different price indices can be used to measure changes in prices that affect different people. Two widely known indices for which inflation rates are reported in many countries are the Consumer Price Index (CPI), which measures consumer prices, and the GDP deflator, which measures price variations associated with domestic production of goods and services.
A small amount of inflation can be viewed as having a beneficial effect on the economy. One reason for this is that it can be difficult to renegotiate prices and wages. With generally increasing prices it is easier for relative prices to adjust.
There are three major types of inflation, as part of what Robert J. Gordon calls the "triangle model":
• Demand-pull inflation: inflation caused by increases in aggregate demand due to increased private and government spending, etc.
• Cost-push inflation: presently termed "supply shock inflation," caused by drops in aggregate supply due to increased prices of inputs, for example. Take for instance a sudden decrease in the supply of oil, which would increase oil prices. Producers for whom oil is a part of their costs could then pass this on to consumers in the form of increased prices.
• Built-in inflation: induced by adaptive expectations, often linked to the "price/wage spiral" because it involves workers trying to keep their wages up (gross wages have to increase above the CPI rate to net to CPI after-tax) with prices and then employers passing higher costs on to consumers as higher prices as part of a "vicious circle." Built-in inflation reflects events in the past, and so might be seen as hangover inflation.
There are a number of methods that have been suggested to control inflation. High interest rates and slow growth of the money supply are the traditional ways through which central banks fight or prevent inflation, though they have different approaches.

Improving OPNQRYF Performance:

You can improve OPNQRYF performance in an interactive session by stopping status display. This is very useful when querying a very large database.
Execute following CL command before running the query:
CHGJOB STSMSG(*NONE)
This stops status messages from being displayed at the bottom of your screen.
To reactivate the status message display use the following command at the end of CL program:
CHGJOB STSMSG(*NORMAL)

Thursday, April 10, 2008

Invoking a PC Application through CL Program:

From a CL program we can invoke PC applications such as Notepad, MS Word, Calc, etc.
STRPCCMD allows you to launch an application on a personal computer that is attached to the host iSeries system. It is the only PC Organizer (PCO) function supported by Host On-Demand for 5250 sessions. STRPCCMD can be invoked directly from the iSeries command line or through the Client Access/400 Organizer menu.
To use STRPCCMD, do the following:
1. Start a 5250 session.
2. Log into the iSeries host system.
3. Start PC Organizer. Enter the following command at the iSeries command line:
STRPCO PCTA(*NO)
Host On-Demand does not support the PC Text Assist (PCTA) function of PC Organizer. You must specify a value of *NO for the PCTA parameter.
4. To run STRPCCMD, do one of the following:
o From the Client Access/400 menu, select option 7 (Start a PC Command).
o Enter the following at the iSeries command line, then press the PF4 key:
o STRPCCMD
5. Specify the full path name of the application (for example, C:\winnt\notepad.exe) at the PC Command prompt.
o STRPCCMD PCCMD('C:\WINNT\NOTEPAD.EXE')
6. Specify whether the computer should pause after running a command. Enter one of the following:
*YES
The computer pauses after running the PC command, then returns to the iSeries session. If the PAUSE parameter is set to *YES, Host On-Demand waits for the PC process to complete and is blocked until the process exits. Host On-Demand waits for the parent process to complete even though the PC process executes a child process.
*NO
The computer returns directly to the iSeries session.
Please try out this program to open a notepad application from your AS/400 System.
PGM
MONMSG MSGID(CPF0000 IWS4010)
STRPCO PCTA(*YES)
STRPCCMD PCCMD('C:\WINDOWS\NOTEPAD.EXE') PAUSE(*NO)
ENDPGM

Wednesday, April 9, 2008

Encryption:

With the incredible growth of the Internet, a major concern has been how secure the Internet is, especially when you're sending sensitive information through it. There’s a whole lot of information that we don't want other people to see, such as:
• Credit-card information
• Social Security numbers
• Private correspondence
• Personal details
• Sensitive company information
• Bank-account information

Information security is provided on computers and over the Internet by a variety of methods. The most popular forms of security all rely on encryption, the process of encoding information in such a way that only the person (or computer) with the key can decode it.
Most computer encryption systems belong in one of two categories:
• Symmetric-key encryption
• Public-key encryption

In symmetric-key encryption, each computer has a secret key (code) that it can use to encrypt a packet of information before it is sent over the network to another computer. Symmetric-key encryption is essentially the same as a secret code that each of the two computers must know in order to decode the information. The code provides the key to decoding the message.

Public-key encryption uses a combination of a private key and a public key. The private key is known only to your computer, while the public key is given by your computer to any computer that wants to communicate securely with it. To decode an encrypted message, a computer must use the public key, provided by the originating computer, and its own private key.
To implement public-key encryption on a large scale, such as a secure Web server might need, requires a different approach. This is where digital certificates come in. A digital certificate is basically a bit of information that says that the Web server is trusted by an independent source known as a certificate authority. The certificate authority acts as a middleman that both computers trust. It confirms that each computer is in fact who it says it is, and then provides the public keys of each computer to the other.
A popular implementation of public-key encryption is the Secure Sockets Layer (SSL). Originally developed by Netscape, SSL is an Internet security protocol used by Internet browsers and Web servers to transmit sensitive information. You will notice that the "http" in the address line is replaced with "https," and you should see a small padlock in the status bar at the bottom of the browser window.

In fact, sending information over a computer network is often much more secure than sending it any other way.

Voice over Internet Protocol (VOIP):


Voice over Internet Protocol (VoIP) is a protocol optimized for the transmission of voice through the Internet or other packet switched networks. VoIP is often used abstractly to refer to the actual transmission of voice (rather than the protocol implementing it). This latter concept is also referred to as IP telephony, Internet telephony, voice over broadband, broadband telephony, and broadband phone. The last two are arguably incorrect because telephone-quality voice communications are, by definition, narrowband.
Voice-over-IP systems carry telephony signals as digital audio, typically reduced in data rate using speech data compression techniques, encapsulated in a data packet stream over IP.
There are two types of PSTN-to-VoIP services: Direct inward dialing (DID) and access numbers. DID will connect a caller directly to the VoIP user, while access numbers require the caller to provide an extension number for the called VoIP user.
Some cost savings are due to utilizing a single network to carry voice and data, especially where users have underused network capacity that can carry VoIP at no additional cost. VoIP to VoIP phone calls are sometimes free, while VoIP calls connecting to public switched telephone networks (VoIP-to-PSTN), may have a cost that is borne by the VoIP user.


Corporate customer telephone support departments often use IP telephony exclusively to take advantage of the data abstraction. The benefits of using this technology are the need for only one class of circuit connection and better bandwidth use. Companies can acquire their own gateways to eliminate third-party costs, which is worthwhile in some situations.

Tuesday, April 8, 2008

V6R1 RPG Enhancements:

Following are the enhancements IBM has made to RPG in V6R1. This is downright intriguing.

Ø Increased Sizes

The maximum amount of storage that can be occupied by a data structure, array, or stand-alone field is now 16M. Yes, the maximum size of a character field has increased from 64K to 16M (16,773,104 bytes to be exact). This enhancement means that a whole lot of dynamic memory management procedures and pointer/user space procedures are about to disappear. It also means that operations such as XML-INTO are now more functional and less cumbersome to use.

D LongFld S a Len(25000000)
D BigDS DS a Len(60000000)
D BigArray1 S a Len(10000000)
D Dim(6)
D BigArray2 S 1a Dim(50000000)
D BigVarying S a Len(25000000) Varying(4)
D DummyPtr S *

DummyPtr = %Addr(BigVarying : *Data);

You can use the LEN keyword to define a large length.

· Since only seven positions are available to specify the length of a field in the D spec, you use the LEN keyword to specify a length greater than 9,999,999.
· The LEN keyword may also be applied to a data structure.
· The LEN keyword may also be applied to an array.
· Larger variable sizes also mean you can have a larger number of elements in an array as long as the total storage for the array does not exceed 16M. The same applies to multiple-occurrence data structures.
· Varying-length fields now require 2 or 4 bytes to store the actual length of the field. A size of 2 is assumed if the specified length is between 1 and 65535; otherwise, a size of 4 is assumed. You can specify either VARYING(2) or VARYING(4) for definitions whose length is between 1 and 65535. For definitions whose length is greater than 65535, VARYING(4) is required.
· The %ADDR BIF has also been enhanced. The optional *DATA parameter may be specified so that %ADDR returns the address of the data portion of a variable-length field.

UCS-2 and Graphic fields can have a maximum length of 8M.
The maximum size of literals has also increased:
· Character literals can now have a length of up to 16380 characters.
· UCS-2 literals can now have a length of up to 8190 UCS-2 characters.
· Graphic literals can now have a length of up to 16379 DBCS characters.

UCS-2 variables can now be initialized with character or graphic literals without using the %UCS2 built-in function. UCS-2 enhancements are available as PTFs back to V5R3.
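For example (a sketch):

```
 * A UCS-2 field initialized directly with a character literal;
 * previously the %UCS2 built-in was needed.
D Ucs2Fld         S             20C   Inz('Hello')
```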


Ø Files Defined in Subprocedures
Subprocedures can now have their own F-specs. This means that a file defined in a subprocedure is local to the subprocedure.

P GetName B

FCustomer IF E K Disk
D GetName PI 50a
D CusNo 10a
D GetCustRec Ds LikeRec(CustomerR)

/Free
Chain Cusno Customer GetCustRec;
Return GetCustRec.CustName;
/End-free

P E

A few points are worth noting:

· Input and output specifications are not generated for local files; therefore, all input and output must be done with result data structures.
· By default, files are automatically opened when the subprocedure is called and automatically closed when the subprocedure ends (either normally or abnormally). Of course, USROPN may be specified for a file so the opening and closing of the file are under your control in the subprocedure.
· You can change the default opening and closing of the file by specifying the STATIC keyword on the F-spec. This means that the storage associated with the file is static and all invocations of the subprocedure will use the same file. If the file is open when the procedure returns, it will remain open for the next call to the procedure.

Ø Templates

The TEMPLATE keyword allows you to define template data structures, stand-alone fields, and files.

· Templates for Data Structures
The concept of a template for data structures is not new. Below is the traditional way of defining and using a virtual template.


D Phone DS Based(DummyPtr) Qualified
D CountryCode 5i 0
D NDDPrefix 5
D AreaCode 5
D Number 9
D Extension 4
D IDDPrefix 5

D HomePhone DS LikeDS(Phone)
D CellPhone DS LikeDS(Phone)
D WorkPhone DS LikeDS(Phone)

/Free

HomePhone.CountryCode = 353;
CellPhone.CountryCode = 353;
WorkPhone.CountryCode = 353;


Below, the comparable structure is defined with the new TEMPLATE keyword. At first glance the two versions are fairly similar, the major differences being the TEMPLATE and INZ keywords on the DS definition of Phone and the INZ(353) for the CountryCode subfield. The use of the INZ keyword with the template data structure means the same initialization may be applied to any dependent data structures (defined using LIKEDS) by specifying INZ(*LIKEDS).

D Phone DS Template Inz
D CountryCode 5i 0 Inz(353)
D NDDPrefix 5
D AreaCode 5
D Number 9
D Extension 4
D IDDPrefix 5
D HomePhone DS LikeDS(Phone)
D Inz(*LikeDS)
D CellPhone DS LikeDS(Phone)
D Inz(*LikeDS)
D WorkPhone DS LikeDS(Phone)
D Inz(*LikeDS)

· The TEMPLATE keyword may also be applied to a stand-alone field.
A definition defined with a TEMPLATE keyword may only be used as a parameter for the LIKE or LIKEDS keywords or the %SIZE, %LEN, %ELEM, or %DECPOS BIFs; it may not be used as a normal data structure or field.

· Templates for Files
The TEMPLATE keyword may be specified for files. Files defined with the TEMPLATE keyword are not included in the program; the file definition is used only at compile time. The template file can only be used as a basis for defining other files later in the program using the new LIKEFILE keyword. The LIKEFILE keyword also allows you to pass a file as a parameter.

(1) FCustomer IF E K Disk Template

P GetCustomer B
D GetCustomer PI N
(2) D customerFile LikeFile(Customer)
(3) D customerData LikeRec(CustomerR)
D customerKey Like(customerData.CustNo)
D Const

/Free
(4) Chain customerKey customerFile customerData;
If %Found(customerFile);
If customerData.Status <> 'D';
Return *On;
EndIf;
EndIf;
Return *Off;
/End-Free
P E

The above shows a portion of a member containing a subprocedure (GetCustomer) that retrieves customer data. This member is compiled and placed in a service program. The main points to note are these (refer to the numbers above):

1. The TEMPLATE keyword is specified for the customer file. This means that the File specification is for reference purposes only, and the file definition may not be used for processing.
2. The LIKEFILE keyword identifies the first parameter as a file with the same characteristics as the Customer file.
3. The second parameter is the customer record that will be returned by the subprocedure.
4. The file parameter name is used to identify the file to be processed (i.e., customerFile not Customer). Since Input and Output specs are not generated for a template file or a file identified by LIKEFILE, you must use a result data structure for the CHAIN operation.

How do you call the GetCustomer subprocedure? The below snippet of a program shows a typical call: The file to be processed (CustFile) is simply passed as the first parameter. The format of the file CustFile must be the same as the Customer file.

FCustFile IF E K Disk
D custData DS LikeRec(CustFileR)

/Free
If GetCustomer(CustFile: custData: 'THISCUST');
// DO cool things with data
EndIf;

The LIKEFILE keyword may also be used on the F-specs. The example below shows two files (CurrCust and OldCust) being defined like the Customer file. The processing options (file type, record addition, record address type, device, etc.) and most (but not all) of the keywords are inherited.

FCustomer IF E K Disk Template Block(*YES)
FCurrCust LikeFile(Customer)
F ExtFile('CURRLIB/CUSTFILE')
FOldCust LikeFile(Customer)
F ExtFile('OLDLIB/CUSTFILE')

There are a few items to bear in mind when using LIKEFILE:
• The parent file must be externally defined.
• Files are implicitly qualified; therefore, result data structures are required for input and output operations.
• The parent file must define any blocking requirements.
• Not all keywords are inherited. These are keywords that must be unique for each file (e.g., INDDS, INFDS, INFSR, OFLIND).
• Although the SFILE keyword may be inherited, you still need to define it for dependent files in order to specify a unique RRN field.

Ø Other File Enhancements

There are a few other file related enhancements worth having a look at.

(1) FCustomer IF E K Disk ExtDesc('MYLIB/CUSTFILE')
(2) F ExtFile(*ExtDesc)
(3) F Qualified
FScreens CF E WorkStn
D GetCustRec Ds LikeRec(CustomerR)

(4) D GetDetails E Ds ExtName('MYLIB/SCREENS' :
D Screen1 : *ALL)

/Free
(5) Read Customer.CustomerR GetCustRec;
If GetCustRec.Type = 1;
Eval-Corr GetDetails = GetCustRec;
(6) ExFmt Screen1 GetDetails;
EndIf;

1. You are aware that the EXTFILE keyword allows you to specify the file to be used when a program is called (a built-in override), but EXTFILE does not have any effect at compile time. The EXTDESC keyword allows you to specify the file definition to be referenced at compile time. This provides a means of handling SQL defined tables (where the file and format name are the same) other than renaming the record format.

2. The EXTFILE keyword allows a special value of *EXTDESC, which means that the value specified for the EXTDESC keyword should be used by the EXTFILE. Basically, you are specifying the same value for both the EXTDESC (compile time) and EXTFILE (run time) keywords.

3. The QUALIFIED keyword may be specified for files. This means that all references (except for the RENAME, INCLUDE, IGNORE, and SFILE file keywords) to record format names must be qualified with the file name.

4. The file name specified on the EXTNAME keyword may be a character literal in any of the forms 'LIBRARY/FILE', 'FILE', or '*LIBL/FILE'.

5. As with file specifications in subprocedures, Input and Output specifications are not generated for a qualified file (i.e., external fields from the file are not automatically defined as fields in the program and all I/O to the file must be done with result data structures).

6. A data structure name may be specified as the result for an EXFMT operation. This eases the use of qualified data structures with display files. The *ALL value must be specified on the LIKEREC or EXTNAME keyword for the data structure.


Ø No Cycle RPG

If you have delved into the wonderful world of ILE, you are almost certain to have coded a module with the NOMAIN keyword in the control specifications. This means that the module only contains global definitions (F- and D-specs) and subprocedures and that the compiler does not place any RPG cycle code in the module since there is no mainline (a linear module). Since a NOMAIN module does not contain a Program Entry Procedure (PEP), it cannot be compiled as a callable program.

The introduction of the MAIN keyword on the control specification allows you to code a module that may be created as a program but does not contain the RPG cycle. The MAIN keyword lets you specify the name of the subprocedure to be used as the PEP for the program. The snippet below shows the code in a member named CUST001; the member is compiled using the CRTBNDRPG command. The MAIN keyword identifies the MaintainCustomer subprocedure as the PEP for the program.

H Main(MaintainCustomer)


P MaintainCustomer...
P B
D MaintainCustomer...
D PI
/Free
// Lots of cool code
/End-Free
P E

These are a few of the many other enhancements made in V6R1 RPG.

Monday, April 7, 2008

Ins and Outs of Constraints:

Constraints are a function of Referential Integrity, where the database manager ensures the logical consistency of data values between files and the validity of data relationships, based on rules set by you. Impressive as that sounds, it is something you are already doing, except that you are doing it in your application programs. For example, you cannot delete a customer if there are dependent invoices on the invoice file, and you do not employ people under the age of sixteen. Those constraints are implemented through logic in your RPG or COBOL programs. As your applications expand and data becomes accessible outside of the traditional green screen, it becomes imperative that these rules are consistent across all interfaces. What better way to implement them than through the database manager?
Constraints are defined for physical files or tables. You can define three types of Constraint: Key, Referential and Check.
How do you define constraints?
You can define constraints using the Add Physical File Constraint (ADDPFCST) command. You can also use the CHGPFCST, RMVPFCST, WRKPFCST, EDTCPCST and DSPCPCST commands.
You can define them in SQL using the CREATE TABLE or ALTER TABLE commands.
Key constraints
Key constraints define unique keys for a table. The end result is an access path, but there is no corresponding logical file. Since DB2 automatically shares access paths, there is no extra overhead if there is already a logical file that defines the access path.
There are two types of Key constraints: unique and primary. A table may have only one primary Key constraint but may have many unique Key constraints.
For example, a primary key constraint can be defined on the green screen using the following command:
ADDPFCST FILE(ALLTHAT1FL/CATEGOR) TYPE(*PRIKEY)
KEY(CATCOD) CST(CategoryPrimaryKey)
Referential constraints
Referential constraints define a relationship between two tables: a parent and a dependent. The parent file must have a primary constraint defined for it.
In this example there is a dependency between the Category file and the Product file. Every product "belongs" to a category; therefore, you should not be able to delete a category if any products refer to it, and you should not be able to assign a non-existent category to a product. Think how you would manage this in an application: a logical file over the Product file that the Category maintenance program uses to check for dependent records, and a check in the Product maintenance program against the Category file to make sure the category code is valid. (And you can still bypass all of that with DFU.)
This constraint could be defined on the green screen using the following command:
ADDPFCST FILE(ALLTHAT1FL/PRODUCT) TYPE(*REFCST)
KEY(CATCOD) CST(CategoryProductRestriction)
PRNFILE(ALLTHAT1FL/CATEGOR) PRNKEY(CATCOD)
DLTRULE(*RESTRICT) UPDRULE(*RESTRICT)
The possible delete rules are as follows:
• RESTRICT -- Record cannot be deleted if there are dependent records.
• CASCADE -- It's OK to delete a parent, but all dependent records are deleted as well.
• SET NULL -- Null-capable fields in the dependent key are set to null.
• SET DEFAULT -- Fields in the dependent key are set to their default values.
• NO ACTION -- A record cannot be deleted if there are dependent records; however, triggers will be fired before checking Referential constraints.
Check constraints
Check constraints allow you to define validation for columns in a table. The nearest to this in DDS is the COMP, RANGE and VALUES keywords, but they apply only to display files. Check constraints are maintained on the database.
This constraint could be defined on the green screen using the following command:
ADDPFCST FILE(ALLTHAT1FL/PRODUCT) TYPE(*CHKCST)
CST(Right_Price)
CHKCST('SELLPR >= LNDCST')
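The SQL equivalents of the three green-screen examples above are ALTER TABLE statements. Below is a hedged sketch, reusing the library, file, field, and constraint names from the ADDPFCST examples; identifier case and the naming convention (the slash is system naming) may need adjusting for your environment.

```sql
ALTER TABLE ALLTHAT1FL/CATEGOR
  ADD CONSTRAINT CategoryPrimaryKey PRIMARY KEY (CATCOD);

ALTER TABLE ALLTHAT1FL/PRODUCT
  ADD CONSTRAINT CategoryProductRestriction
  FOREIGN KEY (CATCOD)
  REFERENCES ALLTHAT1FL/CATEGOR (CATCOD)
  ON DELETE RESTRICT ON UPDATE RESTRICT;

ALTER TABLE ALLTHAT1FL/PRODUCT
  ADD CONSTRAINT Right_Price
  CHECK (SELLPR >= LNDCST);
```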
Referential Integrity is a powerful tool for use in our applications, providing a means of ensuring data integrity outside of our application programs.

Friday, April 4, 2008

How does RPG talk to a browser?

Please refer to:
http://search400.techtarget.com/tip/0,289483,sid3_gci1043521,00.html

Thursday, April 3, 2008

Multithreading:

Multithreading is a general-purpose programming technique that reduces the complexity and overhead of concurrent programming: identical units of work run in as many threads as needed, all accessing the same file.
You can simulate this concept by having a program run in many jobs, each reading the same file and processing only the records that fall to it, based on an arbitrary set of criteria. For a file of "n" records split across "m" jobs, each job handles roughly n/m records.
Processing time depends on many factors (such as the number of processors and the number of other jobs), so we cannot say that it will drop to exactly 1/m of the original; all we can say is that it will drop considerably.
The idea is to divide the records in the file among "m" jobs, where "m" is a predefined number of parallel jobs that can be chosen based on the size of the file. Each job is assigned a slot of relative record numbers (RRNs). Say there are 100 records and 5 jobs: job 1 gets RRNs 1 to 20, job 2 gets 21 to 40, and so on. The last job gets a full slot, or a smaller one if the number of records does not divide evenly by the number of jobs.
In a nutshell, we run multiple jobs in parallel in such a way that each job is allocated a unique set of records for processing as opposed to having one job processing all the records.
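The slot arithmetic described above is easy to get wrong at the boundaries, so here is a small sketch (in Python, purely illustrative; rrn_slots is a hypothetical helper, not an RPG or system API):

```python
def rrn_slots(total_records, num_jobs):
    """Split RRNs 1..total_records into up to num_jobs contiguous slots."""
    per_job = -(-total_records // num_jobs)  # ceiling division
    slots = []
    start = 1
    for _ in range(num_jobs):
        if start > total_records:
            break                            # fewer records than jobs
        end = min(start + per_job - 1, total_records)
        slots.append((start, end))
        start = end + 1
    return slots

# 100 records across 5 jobs: each job gets a slot of 20 RRNs
print(rrn_slots(100, 5))   # [(1, 20), (21, 40), (41, 60), (61, 80), (81, 100)]
```

With 103 records and 5 jobs, the last job would get the smaller slot (85, 103), as described above.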

Wednesday, April 2, 2008

String Searching Algorithm:

String searching algorithms, sometimes called string matching algorithms, are an important class of string algorithms that try to find a place where one or several strings (also called patterns) are found within a larger string or text.
Let Σ be an alphabet (a finite set). Formally, both the pattern and the searched text are concatenations of elements of Σ. Σ may be an ordinary human alphabet (for example, the letters A through Z in English); other applications may use a binary alphabet (Σ = {0,1}) or, in bioinformatics, the DNA alphabet (Σ = {A,C,G,T}).
In practice, how the string is encoded can affect which search algorithms are feasible. In particular, with a variable-width encoding it is slow (time proportional to N) to find the Nth character, which significantly slows down many of the more advanced search algorithms. A possible solution is to search for the sequence of code units instead, but doing so may produce false matches unless the encoding is specifically designed to avoid them.
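The false-match caveat is real; a quick Python illustration using the well-known Shift-JIS trail-byte clash (the katakana 'ソ' encodes as the two bytes 0x83 0x5C, and 0x5C is also the ASCII backslash):

```python
encoded = "ソフト".encode("shift_jis")   # Shift-JIS bytes for "sofuto" (software)
# A byte-level (code unit) search falsely finds '\' inside the character 'ソ'
print(b"\\" in encoded)                  # True
# A character-level search, after decoding, finds no backslash at all
print("\\" in "ソフト")                  # False
```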
Naïve String Search:
The simplest and least efficient way to see where one string occurs inside another is to check each place it could be, one by one, to see if it's there. So first we see if there's a copy of the needle in the first few characters of the haystack; if not, we look to see if there's a copy of the needle starting at the second character of the haystack; if not, we look starting at the third character, and so forth.
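The description above translates almost directly into code; a minimal sketch in Python (the idea is language-neutral):

```python
def naive_search(haystack, needle):
    """Return the index of the first occurrence of needle in haystack, or -1."""
    n, m = len(haystack), len(needle)
    for i in range(n - m + 1):            # try every candidate position
        if haystack[i:i + m] == needle:   # compare needle against this window
            return i
    return -1

print(naive_search("hello world", "world"))   # 6
```

The worst case is O(n*m) character comparisons, for example searching for "aaab" in a haystack consisting entirely of 'a's.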

Finite state automaton based search:
In this approach, we avoid backtracking by constructing a deterministic finite automaton that recognizes strings containing the desired search string. These are expensive to construct—they are usually created using the powerset construction—but very quick to use. This approach is frequently generalized in practice to search for arbitrary regular expressions.
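A minimal sketch of this idea in Python, using the classic KMP-style DFA construction over a fixed alphabet (the alphabet is assumed to cover the pattern; characters outside it simply reset the automaton):

```python
def build_dfa(pattern, alphabet):
    """dfa[c][s] = automaton state after reading character c in state s."""
    m = len(pattern)
    dfa = {c: [0] * m for c in alphabet}
    dfa[pattern[0]][0] = 1
    x = 0                                  # restart state for mismatches
    for j in range(1, m):
        for c in alphabet:
            dfa[c][j] = dfa[c][x]          # copy mismatch transitions
        dfa[pattern[j]][j] = j + 1         # match transition
        x = dfa[pattern[j]][x]
    return dfa

def dfa_search(text, pattern, alphabet="ACGT"):
    """Scan text once, with no backtracking; return first match index or -1."""
    dfa = build_dfa(pattern, alphabet)
    state, m = 0, len(pattern)
    for i, c in enumerate(text):
        state = dfa[c][state] if c in dfa else 0
        if state == m:
            return i - m + 1               # match ends at position i
    return -1

print(dfa_search("AACACG", "ACAC"))        # 1
```

Building the table costs O(m·|Σ|), which is the "expensive to construct" part; the scan itself is O(n) and never re-reads a character.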
Index methods:
Faster search algorithms are based on preprocessing of the text. After building a substring index, for example a suffix tree or suffix array, the occurrences of a pattern can be found quickly. As an example, a suffix tree for a text of length n can be built in Θ(n) time, and all z occurrences of a pattern of length m can then be found in O(m + z) time (if the alphabet size is viewed as a constant).
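To make the index idea concrete, here is a naive suffix array sketch in Python. The build below is O(n² log n), nothing like the linear-time constructions used in practice, but the query side shows the point: once the index exists, all occurrences are found by binary search over the sorted suffixes.

```python
def suffix_array(text):
    """Sorted starting positions of all suffixes of text (naive build)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, pattern, sa):
    """All start positions of pattern in text, via binary search on sa."""
    m = len(pattern)
    # Lower bound: first suffix that is >= pattern.
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:] < pattern:
            lo = mid + 1
        else:
            hi = mid
    first = lo
    # Upper bound: end of the block of suffixes that start with pattern.
    hi = len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] == pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[first:lo])

sa = suffix_array("banana")
print(sa)                                    # [5, 3, 1, 0, 4, 2]
print(find_occurrences("banana", "ana", sa)) # [1, 3]
```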