The S-Language


The S programming language was developed at Bell Laboratories specifically for statistical modeling. There are two main implementations of S. One is a commercial product developed by Insightful under the name S-Plus. The other is an open-source implementation called R. S lets you create objects, is highly extensible, and has powerful graphing capabilities.
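A minimal sketch of these ideas in R (object creation, a user-defined function, and a built-in graphic), using only base R:

x <- rnorm(100)                      # create a numeric vector object
MySummary <- function(v) {           # extend the language with your own function
    list(mean = mean(v), sd = sd(v))
}
MySummary(x)
hist(x)                              # one of the built-in graphing functions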

Tips
Tip 1

Set Memory Size

memory.size(max = TRUE)
Tip 2

Today’s Date

Today <- format(Sys.Date(), "%d %b %Y")
Tip 3

Set Working Directory

setwd("C:/")
Tip 4

Load In Data

ExampleData.path <- file.path(getwd(), "USDemographics.CSV")
ExampleData.FullSet <- read.table(ExampleData.path, header=TRUE, sep=",", na.strings="NA", dec=".", strip.white=TRUE)
Tip 5

Split Data

ExampleData.Nrows <- nrow(ExampleData.FullSet)
ExampleData.NCol  <- ncol(ExampleData.FullSet)
ExampleData.SampleSize <- floor(ExampleData.Nrows / 2)   # ensure an integer sample size
ExampleData.Sample <- sample(nrow(ExampleData.FullSet), size = ExampleData.SampleSize,
                             replace = FALSE, prob = NULL)
ExampleData.HoldBack <- ExampleData.FullSet[ExampleData.Sample, c(5, 1:ExampleData.NCol)]
ExampleData.Run      <- ExampleData.FullSet[-ExampleData.Sample, c(5, 1:ExampleData.NCol)]
Tip 6

Create Function

Confusion <- function(a, b){
                  tbl <- table(a, b)
                  mis <- 1 - sum(diag(tbl))/sum(tbl)
                  list(table = tbl, misclass.prob = mis)
                   }
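A quick usage sketch for the function above, assuming two equal-length vectors of actual and predicted class labels (the vectors below are made up for illustration):

actual    <- c(0, 0, 1, 1, 1)
predicted <- c(0, 1, 1, 1, 1)
Confusion(actual, predicted)   # returns the cross-tabulation and the misclassification rate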
Tip 7

Recode Fields

library(car)   # recode() as used here comes from the car package
ExampleData.FullSet$Savings   # inspect the field to be recoded
ExampleData.FullSet$SavingsCat <- recode(ExampleData.FullSet$Savings,
    "-40000.00:-100.00 = 'HighNeg'; -100.00:-50.00 = 'MedNeg'; -50.00:10.00 = 'LowNeg';
     10.00:50.00 = 'Low'; 50.00:100.00 = 'Med'; 100.00:1000.00 = 'High'",
    as.factor.result = TRUE)
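A quick check of the result (assuming the recode above ran without error):

table(ExampleData.FullSet$SavingsCat)   # counts per savings category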
Tip 8

Summarize Data

summary(ExampleData.FullSet)
Tip 9

Save output

save.image(file = "c:/test.RData", version = NULL, ascii = FALSE, compress = FALSE, safe = TRUE)
Tip 10

Subset

MyData.SubSample <- subset(MyData.Full, MyField ==0)
Tip 11

Remove Object From Memory

remove(list = c("MyObject"))
Tip 12

Create a Dataframe

TmpOutput <- data.frame(Fields = c("Field1", "Field2", "Field3"), Values = c(1, 2, 2))
Tip 13

Cut

data(swiss)
x <- swiss$Education  
swiss$Educated <- cut(x, breaks=c(0, 11, 999), labels=c("0", "1"))
Tip 14

Create Directories

dir.create("c:/MyProjects")

Unix/Linux Data Management

Data management using Unix/Linux is easy, but it does have a few quirks. First, the header (the first row, which contains the field names) is typically not stored in the data file, so you need a separate scheme or layout describing the file. That is because most file operations do not treat the first line differently from the rest of the file. Also, complex multi-table queries can only be achieved in multiple steps, unlike in SQL and SAS. But the speed and efficiency of the code make Unix/Linux a strong data management tool.

 
Example:
Customer.dat
1 Joe 23
2 Mika 45
3 Lin 34
4 Sara 56
5 Susan 18
PurchaseOrder.dat
1 3 Fiction
2 1 Biography
3 1 Fiction
4 2 Biography
5 3 Fiction
6 4 Fiction
 
     

  This data would be stored without column names.

SELECT

less customer.dat
1 Joe 23
2 Mika 45
3 Lin 34
4 Sara 56
5 Susan 18

ORDER BY

less customer.dat | sort -n -k 3
5 Susan 18
1 Joe 23
3 Lin 34
2 Mika 45
4 Sara 56

WHERE

less customer.dat | awk '{if (substr($0,3,5) == "Susan") print $0}'
5 Susan 18

INNER JOIN

sort -k 2 purchaseorder.dat > srt_purchaseorder.dat
join -1 1 -2 2 customer.dat srt_purchaseorder.dat
1 Joe 23 2 Biography
1 Joe 23 3 Fiction
2 Mika 45 4 Biography
3 Lin 34 1 Fiction
3 Lin 34 5 Fiction
4 Sara 56 6 Fiction

LEFT OUTER JOIN

sort -k 2 purchaseorder.dat > srt_purchaseorder.dat
join -a1 -1 1 -2 2 customer.dat srt_purchaseorder.dat
1 Joe 23 2 Biography
1 Joe 23 3 Fiction
2 Mika 45 4 Biography
3 Lin 34 1 Fiction
3 Lin 34 5 Fiction
4 Sara 56 6 Fiction
5 Susan 18 NULL NULL

GROUP BY

join  -1 1 -2 2 customer.dat srt_purchaseorder.dat > sum.dat
awk 'BEGIN { FS=OFS=SUBSEP=" " } { arr[$2,$3]++ } END { for (i in arr) print i, arr[i] }' sum.dat
Joe 23 2
Mika 45 1
Lin 34 2
Sara 56 1

UPDATE

less customer.dat | awk '{if (substr($0,3,5) == "Susan") print substr($0,1,8) "59"; else print $0}'
1 Joe 23
2 Mika 45
3 Lin 34
4 Sara 56
5 Susan 59

INSERT

cat customer.dat new_cust.dat   (here new_cust.dat contains the single new record: 6 Terry 50)
1 Joe 23
2 Mika 45
3 Lin 34
4 Sara 56
5 Susan 18
6 Terry 50

DELETE

less customer.dat | awk '{if (substr($0,1,1) != "1") print $0}'
2 Mika 45
3 Lin 34
4 Sara 56
5 Susan 18

 

Unix Primer

A Primer on Data Management in Unix/Linux

Data manipulation in Unix/Linux is powerful yet easy after some practice. Most basic file manipulation can be achieved using the standard toolset provided with most Unix/Linux installations. First, let's generate the data files we will use for this exercise: some random data and a list of the files in your home directory.
Type: od -D -A n /dev/random | head -100 > mydata.dat
Result: You will now have a 100-record data file with four columns of random numbers (the od command dumps data in various usable formats). Now let's create another dataset.
Type: ls -l > mydir.out

wc Word count (the -l option gives the number of lines).
Type: less mydata.dat | wc -l
Result: 100
gzip Compresses a file for you. Much of the size of a file (especially a text file) can be shrunk. The trade-off for the smaller size is slower access time and the need to uncompress the file to process it.
Type: gzip -c mydata.dat > mydata.gz
Result: You have created a gz file from mydata.dat.
Type: ls -l mydata.*
Result:
-rw-rw-rw- 1 tharris mkgroup-l-d 2348 May 20 08:44 mydata.gz
-rw-rw-rw- 1 tharris mkgroup-l-d 4500 May 20 08:42 mydata.dat
Notice the gzipped file is 2348 bytes while the original file is 4500.
zcat Allows you to decompress a gzipped file. You can pipe the output to a reader like less or to a file.
Type: zcat mydata.gz | less
Result: The resulting output should be the same as the original file.
grep Allows you to search a file for a particular string; it then outputs the complete lines containing that string.
Type: grep Apr mydir.out
Result:
-rwx------   1 tharris        mkgroup-l-d    2402 Apr 12 09:41 myproject.r
-rwx------+  1 tharris        ????????       1905 Apr 12 09:29 DesktopGarpLog.txt
drwx------+  3 tharris        ????????          0 Apr 12 09:34 Favorites
Note: Remember to change the month 'Apr' to the month you are interested in.
sed Search and replace.
Type: less mydir.out | sed 's/????????/windows /'
Result:
-rwx------   1 tharris        mkgroup-l-d    2402 Apr 12 09:41 myproject.r
drwx------+ 13 tharris        windows           0 Jan 28 06:45 Application Data
drwx------+  6 tharris        windows           0 May 25 06:12 Desktop
-rwx------+  1 tharris        windows        1905 Apr 12 09:29 DesktopGarpLog.txt
drwx------+  3 tharris        windows           0 Apr 12 09:34 Favorites
Now those annoying question marks are gone.
Type: sed 's/????????/windows /' mydir.out > mydir_2.out
Result: You will now have a text file called mydir_2.out with ???????? replaced by windows.
cut Allows you to access data columns.
Type: cut -c 50-56 mydir_2.out | less
Result (of course with different dates):
Apr 1
Apr 25
May 12
May 12
Apr 17
awk Allows you to access data columns but is more powerful than cut. Both cut and awk can be used like a WHERE clause in SQL or an IF clause in SAS.
Type: less mydir_2.out | awk '{if (substr($0,51,3) == "Apr") print $0}' | less
Result:
-rwx------   1 tharris        mkgroup-l-d    2402 Apr 12 09:41 myproject.r
-rwx------+  1 tharris       windows        1905 Apr 12 09:29 DesktopGarpLog.txt
drwx------+  3 tharris       windows           0 Apr 12 09:34 Favorites
To create a file:
Type: less mydir_2.out | awk '{if (substr($0,51,3) == "Apr") print $0}' > mydir_3.out
sort Allows you to order a file in either descending or ascending order. You can specify a column to use as the key to sort the file by.
Type: sort -n -t ' ' -k 2 mydata.dat
Result: The output of mydata.dat will be displayed sorted by the second field. The '-n' option is for a numeric rather than an alphabetical sort. The '-r' option is for a reverse (descending) order. The -t ' ' indicates the file is separated by spaces, and '-k 2' means sort by the second column.
head When working against very large files it is sometimes useful to work with a subset, especially when debugging code. The head command allows you to do this.
Type: less mydata.dat | head -5
Result: Only the top five lines of the output will be shown.
tail To work with the bottom rather than the top of a file, use tail.
Type: less mydata.dat | tail -5
Result: Only the bottom five records of the file will be shown.
join Unix has a join command similar to a SQL join or a SAS merge statement. To test the join function, let's first construct two new data sets. Enter the following code:
less mydata.dat > mydata_2.dat
less mydata_2.dat | awk '{if (substr($0,14,1) == "3" || substr($0,15,1) == "1") print substr($0,13,10), "Y"; else print substr($0,13,10), "N"}' > mydata_lkup.dat
Now you have two new datasets: a subsample of our original random-number data set and a lookup table with a key pointing back to the original data.
Now type: join -1 2 -2 1 mydata_2.dat mydata_lkup.dat | less
Result:
614230376 2116315928 2808687127 1513727505 Y
2786586641 1078697315 4284908016 933354663 N
901415638 2527438256 3497368500 3894108367 N
3338765228 3463564639 3715602095 3944235862 Y
2901961487 2787207594 3739011318 4040597610 N
2380204561 2381578890 2611563505 292512547 Y
3810153523 2377573389 44853491 2382807132 Y
1853161002 851838940 4237925568 3627299786 N
2070425071 1236857502 150640963 2672607003 N
534159806 1991382958 2279021152 3452133675 N
Note: col2 has been swapped with col1 and new data has been appended to the end of the data set. The option -1 2 -2 1 indicates which field to use for the join. In this example we want col2 in our dataset to match col1 in the lookup table.
paste Another way to join two files is to use the paste command. paste will merge two files horizontally regardless of any key value. If your two files are sorted properly and do not contain any unlinked values, like the datasets we constructed, paste is a faster way to merge the files.
Type: paste mydata_2.dat mydata_lkup.dat
Result:
2116315928  614230376 2808687127 1513727505     614230376 Y
1078697315 2786586641 4284908016  933354663    2786586641 N
2527438256  901415638 3497368500 3894108367     901415638 N
3463564639 3338765228 3715602095 3944235862    3338765228 Y
2787207594 2901961487 3739011318 4040597610    2901961487 N
2381578890 2380204561 2611563505  292512547    2380204561 Y
2377573389 3810153523   44853491 2382807132    3810153523 Y
  51838940 1853161002 4237925568 3627299786    1853161002 N
1236857502 2070425071  150640963 2672607003    2070425071 N
1991382958  534159806 2279021152 3452133675     534159806 N
paste can also be used to pivot a file (or two files) so that all the text is on one line.
Type: paste -d: -s mydata_2.dat
Result: All the data will be on one line. This is sometimes useful in data processing.
split This command is used to break apart a file into smaller parts.
Type: split -l 10 mydata.dat new
Result: You will have ten new files called newaa, newab, ..., newaj, each with 10 observations.
uniq This command collapses sequential lines that are identical into a single unique value.
Type: less mydata_lkup.dat | cut -c 12 | sort | uniq
Result
N
Y
Now let's see what happens if we remove the sort command.
Type: less mydata_lkup.dat | cut -c 12 | uniq
Result
Y
N
Y
N
Y
N
Without the sort command, only identical sequential lines are collapsed.

Quick Unix/Linux Guide

Unix/Linux is an operating system mainly used on servers in business settings. There are numerous consumer-oriented Unix/Linux editions as well as CYGWIN (a Linux-like environment that runs on Windows). Many of the Unix/Linux commands for file operations covered here do things you would typically do via the Windows GUI. If you are familiar with DOS, these commands are the Unix/Linux equivalents of the standard DOS file operation commands. There are numerous GUI front ends to Unix/Linux (CDE, KDE, Window Maker, OS X, ...) that allow you to execute these commands via menus or the mouse as you can from Windows. The power of using commands, however, is speed, clarity in what you are trying to achieve, repeatability, and, if put into a script, reusability. Another benefit is not having to hunt through multiple layers of menus to find (if it even exists) the command you want. This section covers the basic file operation commands available in Unix/Linux. Most of the commands listed here are available in the korn and bash shells. If you do not have access to a Unix or Linux machine, I recommend downloading and installing CYGWIN from the Cygwin website.

xterm A Unix/Linux terminal is an application that allows you to communicate with a system. Typically, when you start up a terminal it has a shell attached to it that allows commands to be sent to and received from the Unix/Linux system. An xterm shell is like a command window in Windows. Two of the most common Unix/Linux shells are korn (ksh) and bash.
& Runs a command as a separate process (thread). Type: xterm &
Results: A new command shell should appear and you will be able to use both command shells. If you had typed xterm without the &, you would not be able to use the first command shell until you exited the new one.
cd Change directory.
Type: cd ~
Result: You are now in your home directory (~ denotes your home directory).
ls Gets a list of all files and directories in the current directory (the -l option gives a detailed view and -a also shows configuration files). In Unix, configuration files have a "." prefix. Type: ls -l
Results: something like this on a CYGWIN installation:
-rwx------   1 tharris        mkgroup-l-d    2402 Apr 12 09:41 myproject.r
drwx------+ 13 tharris        ????????          0 Jan 28 06:45 Application Data
drwx------+  6 tharris        ????????          0 May 25 06:12 Desktop
-rwx------+  1 tharris        ????????       1905 Apr 12 09:29 DesktopGarpLog.txt
drwx------+  3 tharris        ????????          0 Apr 12 09:34 Favorites
The first column, with the cryptic sequence of letters, indicates the rights and permissions.
-rwx: a file with read, write and execute permissions
drwx: a directory with read, write and execute permissions
The second column is the user who created the file (owner). 
The third is the Unix group under which the file was created. Files created while in Windows do not have Unix groups, so the group appears as ????????.
The fourth is the file size, followed by the creation date and the file name.
PIPING The | operator pipes (redirects) output to another command. This allows you to chain multiple commands together without having to create files at each intermediate step. The > operator redirects output to a physical file. Type: ls -l > test.out
Results: You have created a text file containing the detailed listing of the files in the current directory.
mkdir Make a directory. Type: mkdir test
Results: Type ls and you will see your new directory listed.
mv Moves a file. This is useful to rename output files when debugging a process.
Type: mv test.out list.out
Results: Now the file test.out has been renamed to list.out.
cp Copy a file or directory (with the -r option).
Type: cp list.out test.out
Results: Now you have two files, list.out and test.out.
rm Remove a file or directory.
Type: rm list.out
Results: The list.out file has been deleted.
more Allows read-only access to a file. To quit out of more, press q.
Type: more test.out
Result: The output should look the same as if you ran the ls -l command.
less Also allows read-only access to a text file. The name is a misnomer; less has greater capabilities than more. Like more, press q to quit. Both more and less have far more capability than I will discuss here.
vi and emacs Two powerful text editors for Unix. vi is a command-line text editor and does require a little sit-down time with the manual. Once you have mastered a few simple commands, vi is a quick tool for editing text. If, however, you want a tool more familiar to users of Windows text editors, try emacs.
echo To make text appear in the command window use the echo command.  This can be useful to alert users to how a script or program is running.
Creating a script To create a script, type emacs myscript.sh & at the command prompt. Now let's put all that we have learned to use. Type the following code:
#!/usr/bin/sh
echo I am starting
mkdir mytest
ls -l > ./mytest/test2.out
cd ./mytest
cp test2.out test3.out
mv test3.out test4.out
echo I am done
Now save the file by clicking on the familiar save icon. Everything should look familiar except #!/usr/bin/sh. That line tells Unix which shell to use to execute the script.
chmod To change the permissions (mode) of a file, use chmod. A good setting is 775. Type: chmod 775 myscript.sh
Results: Now we can execute the script myscript.sh.
Executing a Script   Type ./myscript.sh
Result:
I am starting
I am done
Note: If nothing appears between the two statements in the command terminal, the script ran successfully. To check the results, change to the mytest directory and examine its contents.
sleep If you want a script to pause between commands, use the sleep command. This can be useful to let the user kill a process that spawns lots of other processes at waypoints in the script. Type: sleep 360
Results: The command terminal should pause for 360 seconds (six minutes).
nohup Allows you to execute a script and log out without terminating the script. This is useful with scripts that take a long time to run.
Type: nohup ./myscript.sh
Result: nohup: appending output to `nohup.out'
        [1] 272
You will not see anything in the command window. The output will be redirected to a text file. The text [1] 272 tells you the process id and will be different for you.
To read the output from nohup, type: less nohup.out
ps Allows you to see the status of processes (with the -p option you can specify a process to examine). This can be useful when running long jobs. It is similar to the Windows Task Manager. For example, add the following line to your script: sleep 360. Now when you run the script it will pause for six minutes, which gives us time to use the ps command. Type: nohup ./myscript.sh
Result: nohup: appending output to `nohup.out'
           [1] 5056
Type: ps -p 5056
Result:  
      PID    PPID    PGID     WINPID  TTY  UID    STIME COMMAND
     5056    1896    5056       2168    0 156949 08:23:37 /usr/bin/sh
Another example is to show all the processes you have spawned.
Type: ps -u
Results:
      PID    PPID    PGID     WINPID  TTY  UID    STIME COMMAND
     2528       1    2528       2528  con 156949 08:03:43 /usr/X11R6/bin/Xwin
     2336       1    2336       2876  con 156949 08:03:43 /usr/bin/xterm
     1220       1    1220       2984  con 156949 08:03:43 /usr/X11R6/bin/wmaker
     3912    1220    1220       1260  con 156949 08:03:45 /usr/X11R6/bin/wmaker
     4368    2336    4368       4528    1 156949 08:03:49 /usr/bin/bash
     5056    1896    5056       2168    0 156949 08:23:37 /usr/bin/sh
     6088    5056    5056       5848    0 156949 08:23:40 /usr/bin/sleep
     4612    1896    4612       3736    0 156949 08:25:24 /usr/bin/ps
top Lists the processes consuming the most resources on the machine. Important for telling whether you are playing nice with others.
kill Allows a user to stop a process.
Type: kill 6088
Result: Process 6088 will be terminated regardless of its state.
nice If you are not playing nice with others (i.e., you are hogging resources), you can change the priority of your process to allow others to get their work done. nice enables this. Niceness values range from -20 (highest priority) to 19 (lowest); the default increment when you run a command under nice is 10. Type: nice -n 17 nohup ./myscript.sh
Result: The process will run at low priority. When allocating resources, Unix will prioritize other users' processes over yours.

Unix/Linux vs DOS

 

Command                     UNIX           DOS
List files                  ls             dir
Change directory            cd             cd
Comments                    #              rem
File permissions            chmod          attrib
Copy                        cp             xcopy
Print text in console       echo           echo
Spawn a new thread          &              call
Read a text file            more or less   type
Delete file                 rm             del
Delete directory            rmdir          rd
Create a directory          mkdir          mkdir
Copy a file or directory    cp             copy
Move a file                 mv             rename
Piping                      >              >
Edit a file                 vi or emacs    edit

R/Splus

The S language's power is not its data management capability, nor is data management the intent of the S language. However, when evaluating the output of a model you will often need to perform basic data management with R/S-Plus, and you will find the S language acceptable in this role. The commands will seem more similar to Unix/Linux than to SQL. However, the S language has many of the benefits of Unix/Linux (a concise language for data management) while being more data-centric (allowing metadata for data frames, including column names). The examples below mirror the SQL-style operations shown earlier; a sketch that constructs the two data frames follows the listing.
Example:
Customer
CustomerID Name Age
1 Joe 23
2 Mika 45
3 Lin 34
4 Sara 56
5 Susan 18
PurchaseOrder
POID CustomerID Purchase
1 3 Fiction
2 1 Biography
3 1 Fiction
4 2 Biography
5 3 Fiction
6 4 Fiction
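The examples below assume the two tables above have been loaded as data frames named customer and purchaseorder. A minimal sketch that builds them directly (the lower-case column names custid, fname, age, poid, and purchase match the code used in the examples):

customer <- data.frame(custid = 1:5,
                       fname  = c("Joe", "Mika", "Lin", "Sara", "Susan"),
                       age    = c(23, 45, 34, 56, 18))
purchaseorder <- data.frame(poid     = 1:6,
                            custid   = c(3, 1, 1, 2, 3, 4),
                            purchase = c("Fiction", "Biography", "Fiction", "Biography", "Fiction", "Fiction"))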
 
     
 

SELECT

customer
CustomerID Name Age
1 Joe 23
2 Mika 45
3 Lin 34
4 Sara 56
5 Susan 18

ORDER BY

customer[order(customer$age), ]
CustomerID Name Age
5 Susan 18
1 Joe 23
3 Lin 34
2 Mika 45
4 Sara 56

WHERE

subset(customer, custid == 5 )
CustomerID Name Age
5 Susan 18

INNER JOIN

merge(purchaseorder, customer, by.x = "custid", by.y = "custid", all = FALSE)
CustomerID Name Age POID Purchase
1 Joe 23 2 Biography
1 Joe 23 3 Fiction
2 Mika 45 4 Biography
3 Lin 34 1 Fiction
3 Lin 34 5 Fiction
4 Sara 56 6 Fiction

LEFT OUTER JOIN

merge(purchaseorder, customer, by.x = "custid", by.y = "custid", all = TRUE)
CustomerID Name Age POID Purchase
1 Joe 23 2 Biography
1 Joe 23 3 Fiction
2 Mika 45 4 Biography
3 Lin 34 1 Fiction
3 Lin 34 5 Fiction
4 Sara 56 6 Fiction
5 Susan 18 NULL NULL

GROUP BY

cust_sum <- merge(purchaseorder, customer, by.x = "custid", by.y = "custid", all = FALSE)
xtabs(~ fname, cust_sum)
Joe Lin Mika Sara
2 2 1 1

UPDATE

customer[1, ]$age <- 26
 
CustomerID Name Age
1 Joe 26
2 Mika 45
3 Lin 34
4 Sara 56
5 Susan 18

INSERT

newcust <- data.frame(custid = 6, fname = "Terry", age = 50)
rbind(customer, newcust)
 
CustomerID Name Age
1 Joe 23
2 Mika 45
3 Lin 34
4 Sara 56
5 Susan 18
6 Terry 50

DELETE

subset(customer, custid != 1 )
 
CustomerID Name Age
2 Mika 45
3 Lin 34
4 Sara 56
5 Susan 18

 

Example R Function

You can extend R/S-Plus by writing functions, much as you would write macros in the SAS language. These functions can be anything from a new statistical algorithm to file operations to data manipulation. Below I give an example of a custom R function. It takes the output from an rpart tree and converts it to SAS code suitable for use in a data step, which is useful when coding nodes into a model. A dirty little secret: I developed this code by looking at the default print method in the rpart package and adapting it to generate SAS code; it could be modified to generate SQL as well. When attempting to write new code, I suggest first looking at a published package that does something similar and then adapting it to your own use.

The S language (which both R and S-Plus implement) is similar to C. There are many good editors for S; this code was written using Tinn-R.

printSAS.rpart <- function(x, minlength=0, spaces=2, cp,
                           digits=getOption("digits"), ...) {

  tree.depth <- getFromNamespace("tree.depth", "rpart")

  if (!inherits(x, "rpart")) stop("Not legitimate rpart object")
  if (!is.null(x$frame$splits)) x <- rpconvert(x)   # help for old objects
  if (!missing(cp)) x <- prune.rpart(x, cp=cp)
  frame <- x$frame

  ylevel <- attr(x, "ylevels")
  node <- as.numeric(row.names(frame))
  depth <- tree.depth(node)
  indent <- paste(rep(" ", spaces * 32), collapse = "")

  # 32 is the maximal depth
  if (length(node) > 1) {
    indent <- substring(indent, 1, spaces * seq(depth))
    indent <- paste(c("", indent[depth]), format(node), ")", sep = "")
  }
  else indent <- paste(format(node), ")", sep = "")

  tfun <- (x$functions)$print
  if (!is.null(tfun)) {
    if (is.null(frame$yval2))
      yval <- tfun(frame$yval, ylevel, digits)
    else yval <- tfun(frame$yval2, ylevel, digits)
  }
  else yval <- format(signif(frame$yval, digits = digits))

  z <- labels(x, digits=digits, minlength=minlength, ...)

  term <- rep("", length(depth))
  final <- rep("", length(depth))
  temp1 <- rep("", length(depth))
  tempnode <- rep(10000, length(depth))
  term[frame$var == "<leaf>"] <- "Terminal"

  for (i in 1:length(depth))          # walk every node in the frame
  {
    if (term[i] != "Terminal")
    {
      final[i] <- ""
    }

    if (term[i] == "Terminal")
    {
      # walk back up the frame, collecting the ancestor split conditions
      for (j in 1:length(depth))
      {
        if (node[i - j] == 1) break
        if (term[i - j] != "Terminal")
        {
          if (node[i - j] != tempnode[i] - 1)
          {
            if (node[i - j] < tempnode[i])
            {
              temp1[i] <- paste(z[i - j], "And", temp1[i])
              tempnode[i] <- node[i - j]
            }
          }
        }

      } # end for

      final[i] <- paste("If", temp1[i], z[i], "then NodeVal =", yval[i], ";")
    } # end if
  } # end main loop
  cat(final, sep = "\n")   ## print results
}
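A quick usage sketch, assuming the rpart package is installed: fit a small tree on rpart's built-in kyphosis data set and print the generated SAS statements (any rpart object should work).

library(rpart)
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)
printSAS.rpart(fit)   # one "If ... then NodeVal = ... ;" line per terminal node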