Wednesday, December 1, 2004

Will CISC Architecture Die?

As we know, Intel's x86 architecture is based on an old approach called CISC (Complex Instruction Set Computer). Newer microprocessors are based on RISC (Reduced Instruction Set Computer). The difference is that CISC uses a large set of instructions, many of them with different lengths, which makes pipelining, pre-fetching and other schemes for improving parallelism hard to achieve. In RISC, all instructions have the same length. Another difference is that RISC is based on a "register-register" or load-store architecture, which has no accumulator; all of the registers are general-purpose registers (GPRs).
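To make the load-store idea concrete, here is a rough illustration (my own sketch, not taken from any particular ISA manual; the "instructions" in the comments are generic placeholders) of how a compiler might lower one simple C statement on each style of machine:

/* ---- loadstore.c: illustrative only; the mnemonics in the comments are generic ---- */
int a = 2, b = 3, c;

void add_them(void)
{
    /* A memory-to-memory CISC (VAX-style) could encode this in ONE
     * variable-length instruction:
     *     ADD3  c, a, b        ; read a and b from memory, write c to memory
     *
     * A load-store RISC needs a fixed-length sequence through the GPRs:
     *     LOAD  r1, a
     *     LOAD  r2, b
     *     ADD   r3, r1, r2
     *     STORE r3, c
     */
    c = a + b;
}

int main(void) { add_them(); return c == 5 ? 0 : 1; }
/* ---- end of loadstore.c ---- */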

RISC gets its optimizations through compiler assistance. Thus, in the desktop/server market, RISC computers rely on compilers to translate high-level code into simple instructions, while the remaining CISC computers use hardware to translate their instructions into internal microcode. One recent novel variation for the laptop market is the Transmeta Crusoe, which interprets 80x86 instructions and compiles them on the fly into internal instructions. Recent Intel Pentium 4 processors do something similar internally with their superscalar NetBurst design, decoding x86 instructions into micro-operations.

The oldest architecture in computer engineering is the stack architecture. In the early 1960s, a company called Burroughs delivered the B5000, which was based on a stack architecture. Stack architectures were almost obsolete until Sun's Java Virtual Machine revived the idea. Some processors also still use a stack architecture, for example the floating-point unit of x86 processors and some embedded microcontrollers.
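As a toy illustration of what "stack architecture" means (my own sketch; this is not JVM bytecode or x87 code, just the same evaluation style written in C), the expression (2 + 3) * 4 is computed with pushes and pops and no named registers:

/* ---- stackdemo.c: evaluating (2 + 3) * 4 the way a stack machine would ---- */
#include <stdio.h>

static int stack[16];
static int sp = 0;                 /* stack pointer */

static void push(int v) { stack[sp++] = v; }
static int  pop(void)   { return stack[--sp]; }

int main(void)
{
    push(2);                       /* PUSH 2 */
    push(3);                       /* PUSH 3 */
    push(pop() + pop());           /* ADD: pops two operands, pushes the sum     */
    push(4);                       /* PUSH 4 */
    push(pop() * pop());           /* MUL: pops two operands, pushes the product */
    printf("(2 + 3) * 4 = %d\n", pop());
    return 0;
}
/* ---- end of stackdemo.c ---- */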

In the early 1980s, the direction of computer architecture began to swing away from providing high-level hardware support for languages. Ditzel and Patterson analyzed the difficulties encountered by the high-level language architectures and argued that the answer lay in simpler architectures. In another paper, these authors first discussed the idea of RISC and presented the argument for simpler architectures. Two VAX architects, Clark and Strecker, rebutted their proposal.

In 1980, Patterson and his colleagues at Berkeley began the project that was to give this architectural approach its name. They built two computers called RISC-I and RISC-II. Because the IBM project on RISC was not widely known or discussed, the role played by the Berkeley group in promoting the RISC approach was critical to the acceptance of the technology. They also built one of the first instruction caches to support hybrid-format RISC: it supported 16- and 32-bit instructions in memory but 32-bit in the cache. The Berkeley group went on to build RISC computers targeted toward Smalltalk and LISP.

In 1981, Hennessy and his colleagues at Stanford University published a description of the Stanford MIPS computer. Efficient pipelining and compiler-assisted scheduling of the pipeline were both important aspects of the original MIPS design. MIPS stood for "Microprocessor without Interlocked Pipeline Stages", reflecting the lack of hardware to stall the pipeline, as the compiler would handle dependencies.

In 1987, a new company named Sun Microsystems started selling computers based on the SPARC architecture, a derivative of the Berkeley RISC-II processor. In the early 1990s, Apple, IBM, and Motorola co-developed a new RISC processor called PowerPC, which is now used in every computer Apple makes. The latest PowerPC is the G5, a 64-bit RISC processor that Apple ships in dual-processor configurations. Apple's Macs are often faster than comparable Intel x86 machines, but because Intel is strong in marketing and always talks about "gigahertz", many people still think that a higher clock speed always means faster processing, which is not the case. Graphics card producers such as NVidia and ATI also base their graphics coprocessors on RISC-style architectures, with even more advanced technologies (just for your info, NVidia's GeForce 6 GPUs have more transistors than the latest Pentium 4 Extreme Edition).

Why, then, does the old technology (CISC in x86) still survive? The answer is machine-level compatibility. With millions of x86 chips installed in PCs worldwide, Intel of course wants to keep it that way. Its joint project with HP, the IA-64 architecture (one of its products is named Itanium), which draws on RISC ideas, has not been able to repeat the success of x86. And although x86 processors are CISC from the instruction-set perspective, internally they are now a blend of RISC and CISC, borrowing RISC techniques such as pipelining, pre-fetching, superscalar execution, branch prediction and data parallelism (which Intel popularized under the SIMD banner: MMX, SSE, SSE2 and SSE3).
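SIMD simply means one instruction operating on several data elements at once. Here is a rough sketch in plain C of the kind of loop that benefits (my own illustration, not actual intrinsics code; the "four at a time" grouping in the comment is how an SSE unit, whose 128-bit registers hold four 32-bit floats, would chew through it):

/* ---- simd_sketch.c: a scalar loop that SIMD hardware processes in groups of four ---- */
#include <stdio.h>

#define N 8

int main(void)
{
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[N];

    /* Scalar view: one addition per iteration.
     * SIMD view (e.g. the SSE ADDPS instruction): elements 0-3 are added by one
     * instruction and elements 4-7 by the next, so far fewer instructions run. */
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    for (int i = 0; i < N; i++)
        printf("%.0f ", c[i]);
    printf("\n");
    return 0;
}
/* ---- end of simd_sketch.c ---- */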

Monday, November 22, 2004

Is Zuse the Father of the Digital Computer?

I just read the biography of Konrad Zuse. Interesting and very encouraging. Apparently, he deserves to be called the father of the digital computing machine. His inventions, the Z1 through Z4, pioneered the use of digital instead of analog computing. His Z3 was in fact the first operational program-controlled calculating machine, using binary floating-point numbers and Boolean circuits. In 1936 Zuse filed a patent application on some of its parts, which shows that he had developed various major concepts of the digital computer long before men like von Neumann or Burks presented their ideas.

He is also a father of programming languages. In 1945/1946 he finished his "Plankalkül", the world's first programming language, thus establishing his name as a software pioneer. It was not presented to the public until 1972.

It is hard to trace who truly is the father of the computer or computing machine; no single person can be credited with the work. Pascal, Babbage, Turing, Atanasoff, Mauchly, Eckert, and von Neumann all contributed to the invention of the computer as we see and use it nowadays.

His name also reminds me of the German Linux distro SuSE. I believe it is named after him or borrows his name.

Thursday, November 18, 2004

Java is going open source?

There is recent news that Sun is going to make the Java Platform, Standard Edition environment open source, at least for non-profit and academic organizations. This is a breakthrough for the Java community and looks like another "attack" on Microsoft, which still keeps Windows closed to most people.

Another of Sun's plans is to open up Solaris 10. The license is not the GNU GPL, but it seems to be something similar. Will it pull people away from Linux? We will have to wait and see. So far, though, Sun's GUI is far behind Windows, and even Linux, in terms of quality. The new operating system will run on Opteron, Xeon, and UltraSPARC.

Tuesday, November 16, 2004

HD (high-definition) video is stalled again

HD (high-definition) video is stalled again. That refrain is familiar to those of us who have waited the better part of a decade to get our HDTV. But this time it is high-definition DVD that is stuck in a standards conundrum. The situation perfectly illustrates the complexities involved in setting standards for state-of-the-art products—with a global plot twist thrown in for good measure.

The DVD industry’s track record when it comes to standards is far from perfect. Remember when Sony, Philips, and others went against the DVD Forum to establish the DVD+RW format after the Forum shunned the +RW technology in favor of DVD-RAM and DVD-RW? That fight delayed the widespread adoption of DVD recorders for three years.


Now, the industry must address the move toward HDTV-level 1080i (1080-line, interlaced) resolution for DVD content. Consumers who have spent big money on HDTV monitors are waiting.

A product such as DVD involves many standards issues, including factors such as power and interfaces. But two major issues demand the most attention: the recording format and the video-encoding format. Initially, industry players both inside and outside the DVD Forum considered two approaches. The first involved staying with the existing 9-Gbyte format and using more aggressive encoding to pack a feature-length, high-definition movie onto one disc. The DVD Forum, working on what it terms HD-DVD, favored this conservative approach because it would maintain full compatibility with existing discs. Sony, Matsushita, and others favored a move to “Blu-ray” technology. By changing to a "blue"-wavelength laser, Blu-ray would allow a disc to store 25 Gbytes. However, a player would need two lasers—red and blue—to play both old and new discs.

Now, Toshiba and NEC have produced a compromise, which the DVD Forum has endorsed. The duo has developed a blue laser that can provide higher capacity and also read today’s discs. The compromise reduces capacity to 20 Gbytes, 5 Gbytes fewer than Blu-ray.

Of course, the Blu-ray group wants nothing to do with the compromise. This spring, the group formed its own industry body, the BDA (Blu-ray Disc Association). Hey, if you can’t get your way in this industry, just create your own standards body. The game is clearly about getting your own technology embedded into the next standard, so that you can collect royalties on top of the profit that you make selling your own products.

Meanwhile, a battle raged for a while on the encoding side. The BDA initially appeared to be sticking with the MPEG-2 encoding that existing DVDs use. On the DVD Forum side, Microsoft entered the battle, trying to get its Windows Media technology into the next standard. As of press time, a rare outbreak of logical thinking seems to have taken place: Both the BDA and the DVD Forum have announced plans to support MPEG-2, H.264, and Microsoft’s Windows Media 9.

So, for now, we wait. Hollywood hasn’t weighed in with the standard that it prefers. Meanwhile, Sony has proclaimed that its Playstation 3 will use BDA technology. The BDA is also aggressively pursuing datacentric applications in addition to next-generation DVD video. And manufacturers will soon ship expensive, rewritable BDA products.

Enter China. Chinese companies and the Chinese government already had a major dislike for the DVD technology that the rest of the world uses. Specifically, they didn’t like paying royalties to the companies who had key technologies embedded in the DVD standards. And you can bet that Chinese vendors didn’t want to wait for the high-definition conflict in the rest of the world to play out.

So a standards organization of the Chinese government—SAC (Standardisation Administration of China)—rolled out a new spec, EVD (Enhanced Video Disc). The spec is complete, and vendors are shipping early products. North American vendors, such as LSI Logic, are offering EVD chip sets. High-definition Chinese content is trickling into the Chinese market, with some Hollywood content expected next year.

There’s nothing like governments, multiple international standards bodies, and the collaboration of private industry associations to stave off adoption of a compelling new technology.

Friday, October 22, 2004

RE: Printing dates on digital camera pictures

A very useful piece of free software that adds the date. It can be run in batch mode, so you don't have to process every single file manually.

Here is the link.

http://www.friedemann-schmidt.com/software/exifer/

Thursday, October 14, 2004

Slurping Kho Ping Ho's Novel Books

Are you one of the many Indonesians who love reading the classic martial arts novels of Kho Ping Ho? If so, you probably already know that www.detik.com has been providing online editions of his novels. At the time I am writing this blog, it is publishing "Harta Karun Jenghis Khan".

Unfortunately, the site provides only a few pages of the current novel every day (although past novels are archived there), and one album might take hundreds of these pages. I am too lazy to read the site every day. If you are like me, I have developed two simple scripts to download a whole album so it becomes readable offline. One thing you need to know: in order to get a complete set of the novel, you have to wait until the last episode gets published. Currently, the default link you need to pass to the geturl.tcl script is http://jkt.detik.com/khopingho/ [ name of the album ] /episode1.shtml

To run the script:

- do: geturl.tcl http://jkt.detik.com/khopingho/[albumname]/episode1.shtml. For example: geturl.tcl http://jkt.detik.com/khopingho/hartakarunjenghiskhan/episode1.shtml
- type: merge.tcl
- Enter the number of episodes (the number of episode*.shtml files you just downloaded, or any number that is quite big, such as 5000)
- The result will be: albumname.html, and there will be a directory called "images" which will contain all the pictures (if any).

Ok, enough talking; now save the following script as geturl.tcl:


#----------- geturl.tcl -------------------
#!/bin/sh
# the next line restarts using tclsh \
exec tclsh "$0" ${1+"$@"}

# Download every episode*.shtml page of one album with wget.

if {[lindex $argv 0] == ""} {
    puts "Usage: $argv0 url-of-episode1.shtml"
    exit
}

set url     [lindex $argv 0]
set urlpath [file dirname $url]
set logfile "[file tail $urlpath].log"

puts "Getting $url ..."
puts "log file: $logfile"
puts "You can see the progress by typing \"tail -F $logfile\""

# wget options: recursive mirror of the .shtml pages (plus page requisites),
# restricted to detik.com, no directory tree, links converted for offline reading.
set par "-nv --force-html --tries=0 --cache=on --convert-links --recursive --accept=shtml --domains=detik.com --no-directories --glob=on -L -p -m --page-requisites -np -nd -o $logfile $url"

set res [exec sh -c "wget $par"]

#----------------------- end of geturl.tcl -----------------------


and the following as merge.tcl:


#----------------------- start of merge.tcl ----------------------
#!/usr/bin/tclsh

# Merge the downloaded episode*.shtml files into one big HTML file
# and pull any referenced pictures into ./images.

proc AskAndGet { msg } {
    puts -nonewline $msg
    flush stdout
    return [gets stdin]
}

set n     [AskAndGet "Number of files: "]
set title [string range [pwd] [string last "/" [pwd]] end]

puts "Title = $title"
set fho [open "merged.html" w]
# minimal HTML skeleton for the merged output (the original header lines were
# lost in the blog formatting, so this is a reconstruction)
puts $fho "<html>"
puts $fho "<head><title>$title</title></head>"
puts $fho "<body>"

for {set i 1} {$i <= $n} {incr i} {
    set fn "episode${i}.shtml"
    if {[file exists $fn]} {
        puts "File $fn exists...wait while I merge it ...."
        set fhi [open $fn r]
        set line [gets $fhi]
        set line [string trim $line]
        set line "$line\n"
        # skip everything before the start marker of the episode text
        # (the original regexp pattern, an HTML tag, was lost in the blog formatting)
        while {![eof $fhi] && ![regexp -nocase {} $line]} {
            set line [gets $fhi]
            set line [string trim $line]
            set line "$line\n"
        }
        ## found the start point, now read until the end marker
        while {![eof $fhi] && !([regexp -nocase { } $line] || [regexp -nocase { } $line])} {
            set line [gets $fhi]
            if {[regexp -nocase {Episode belum ada atau sudah habis} $line]} {
                continue
            }
            if {[regsub -all {\xC2} $line "" line]} {
                puts "0xC2 found and been removed"
            }
            if {[regsub -all {[\x93]} $line {"} line]} {
                #puts "OPENQUOTE: \{$line\}"
            }
            if {[regsub -all "\x94" $line {"} line]} {
                #puts "CLOSEQUOTE: \{$line\}"
            }
            # download referenced pictures and rewrite the links to ./images
            if {[regexp -nocase {http://jkt.detik.com/khopingho/images/(.*).jpg} $line dummy imgname]} {
                if {![file exists "./images"]} {
                    file mkdir "./images"
                }
                set imgname "${imgname}.jpg"
                if {![file exists "./images/$imgname"]} {
                    puts "Downloading picture: $imgname"
                    set imgurl "http://jkt.detik.com/khopingho/images/$imgname"
                    exec wget -q $imgurl
                    if {[file exists $imgname]} {
                        exec mv $imgname "./images"
                    }
                } else {
                    puts "$imgname exists in ./images; not downloaded"
                }
                regsub -nocase {http://jkt.detik.com/khopingho/images/(.*).jpg} $line "./images/${imgname}" line
            }
            puts $fho $line
            if {[regexp -nocase {} $line] || [regexp -nocase {.*>[ ]*[ ]*TAMAT[ ]*} $line]} {
                puts "End of episode $fn"
                break
            }
        }
        close $fhi
    } else {
        ; #puts "File $fn does not exist"
    }
}

puts $fho "</body></html>"
close $fho

#------------------------ end of merge.tcl -----------------------

Tuesday, October 12, 2004

Fermat's Last Theorem

Did you know that Fermat's Last Theorem was proven in 1994 by Andrew Wiles, a British mathematician working at Princeton University, USA? He got his Ph.D. from the University of Cambridge, UK. See http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Fermat's_last_theorem.html

Pierre de Fermat (1601 - 1665) was a French lawyer (yes, a lawyer!) who pursued mathematics in his spare time. He is most famous for scribbling a note in the margin of a book by Diophantus claiming that he had discovered a proof that the equation x^n + y^n = z^n has no positive integer solutions for n > 2. He stated: "I have discovered a truly marvelous proof of this, which however the margin is not large enough to contain." The proposition, which came to be known as Fermat's Last Theorem, baffled all attempts to prove it until Andrew Wiles succeeded, with the complete proof published in 1995.

For detail, see http://mathworld.wolfram.com/FermatsLastTheorem.html

Monday, October 11, 2004

Quantum Computing

An interesting article! As it says, future computers will not be based on tiny semiconductor gates but will use atomic spins as their logic. It forecasts that around the year 2030 the width of a single wire in a microprocessor will reach the width of a single atom, and that is the limit, unless scientists find other approaches to computer technology.

Computers built on the physics of quantum mechanics would be on the order of a billion times faster than a Pentium III PC. So tasks such as cryptanalysis (cracking encrypted codes) could be done in minutes instead of months or years. Amazing!

In August 2000, researchers at IBM-Almaden Research Center developed what they claimed was the most advanced quantum computer developed to date. The 5-qubit quantum computer was designed to allow the nuclei of five fluorine atoms to interact with each other as qubits, be programmed by radio frequency pulses and be detected by nuclear magnetic resonance (NMR) instruments similar to those used in hospitals (see How Magnetic Resonance Imaging Works for details). Led by Dr. Isaac Chuang, the IBM team was able to solve in one step a mathematical problem that would take conventional computers repeated cycles. The problem, called order-finding, involves finding the period of a particular function, a typical aspect of many mathematical problems involved in cryptography.
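For reference, "order-finding" here means: given integers a and N with gcd(a, N) = 1, find the smallest positive r such that a^r mod N = 1. A brute-force classical version is trivial to write (my own sketch below, nothing to do with IBM's experiment itself); the whole point of the quantum algorithm is that it finds r exponentially faster for the huge N used in cryptography.

/* ---- order.c: classical brute-force order-finding, for illustration only ---- */
#include <stdio.h>

/* smallest r > 0 with (a^r) mod n == 1, assuming gcd(a, n) == 1 */
static unsigned order(unsigned a, unsigned n)
{
    unsigned r = 1, x = a % n;
    while (x != 1) {
        x = (x * a) % n;
        r++;
    }
    return r;
}

int main(void)
{
    /* example: the order of 7 modulo 15 is 4, since 7^4 = 2401 = 160*15 + 1 */
    printf("order of 7 mod 15 = %u\n", order(7, 15));
    return 0;
}
/* ---- end of order.c ---- */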

Full article can be read at:
http://computer.howstuffworks.com/framed.htm?parent=quantum-computer.htm&url=http://www.amd1.com/quantum_computers.html

Friday, October 8, 2004

Today's fastest PC in the world?

Recently I was browsing the Internet, googled "fastest PC", and got to a site called "Michael's SuperComputers" at http://www.michaelscomputers.com/. They claim it outperforms its nearest competitor, Apple's dual 2.5 GHz G5, by a factor of 12.

According to the site, the computer is powered by a 3.6 GHz Pentium 4 Extreme Edition (I wonder how much this processor costs; when I checked some online stores around April-June 2004, it was above $1000!). They also say it uses SATA-X Hyperdrives. I had never heard of SATA-X, but I guess it is a modification of SATA, or the next version of SATA.

For multimedia outputs, they use the top-end VGA card either from NVidia 6800 Ultra or ATI X800E 256 MB and SoundBlaster's Platinum Audigy ZX soundcard for the audio.

I doubt it is the fastest PC out there, as it lacks PCI-X slots, and there is no information about FireWire 2 (800+ Mbps transfer rate) or support for dual-/multi-core processors (multiple cores on a single die, which I believe will be faster than current processors), but you had better check it out for yourself.

Saturday, October 2, 2004

Internet Radio

Have you tried Shoutcast? It is a very interesting plug-in for Winamp. Using the plug-in, you can broadcast the MP3 files being played by Winamp to the Internet. Unfortunately, the method it uses is unicast, meaning that every user connected to your server adds another full stream's worth of bandwidth, unlike multicast-based servers. If you know of any free MP3 multicast server for Linux, please let me know.

I liked it when I tried it on my Linux box; it is very efficient, although for Linux there is only a command-line version (well, according to www.shoutcast.com, you can use the XMMS player, but I tried that with no success). For every user connected to the server, it opens a thread/worker to provide the service. The listener port is 8001, while port 8000 serves a web page where you can see what song is playing, the song history, and statistics (for the admin only - you can set a password for this).

If you are interested, you can try it or pay a visit to my experimental site: mlutfi.homelinux.org

Myths about NOR and NAND Flash

Myth 1: NAND Flash is slower than NOR.
The Reality:
The performance characteristics of NAND Flash are fast write (or program) speed, fast erase speed and medium read speed. This makes NAND Flash ideal for low-cost, high-density, high-speed program/erase applications.

Although NOR Flash offers a slight advantage in random read access times, NAND offers significantly faster program and erase times. For high performance data storage requirements, such as storing digital photos, downloading music and other advanced features popular in today's cell phones, the write/erase speeds of NAND provide a distinct performance advantage. This high performance is also what has made NAND Flash cards so widely used in data storage applications such as digital cameras.

Comparing the time required to perform a typical program and erase sequence for NOR and NAND Flash, for a 64KB erasable unit of memory, NAND outperforms NOR by a wide margin, at 17 milliseconds for NAND, and 2.4 seconds for NOR. In a system application, this difference is large enough to be easily noticed by the user. For the read function, the NAND performance is sufficient to support the system requirement, without a noticeable delay for the user.
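Turning those quoted figures into effective throughput makes the gap obvious (my own back-of-the-envelope arithmetic using the numbers above, i.e. a 64 KB program/erase in roughly 17 ms versus roughly 2.4 s):

/* ---- flash_speed.c: effective 64KB program/erase throughput from the quoted times ---- */
#include <stdio.h>

int main(void)
{
    const double block_kb = 64.0;
    const double nand_sec = 0.017;   /* ~17 ms quoted for NAND */
    const double nor_sec  = 2.4;     /* ~2.4 s quoted for NOR  */

    printf("NAND: %.1f KB/s\n", block_kb / nand_sec);           /* ~3765 KB/s (~3.7 MB/s) */
    printf("NOR : %.1f KB/s\n", block_kb / nor_sec);            /* ~27 KB/s               */
    printf("NAND is roughly %.0fx faster here\n", nor_sec / nand_sec);   /* ~141x         */
    return 0;
}
/* ---- end of flash_speed.c ---- */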

Today, many designers build upon the conventional cell phone memory architecture by increasing density of the NOR and PSRAM, and adding NAND Flash to obtain greater performance and capacity for data storage.

Myth 2: NAND is not reliable
The Reality:
Just as a hard disk drive is widely accepted with little concern about bad sectors, NAND works in a similar way: the controller maps around bad memory areas, and error correction code (ECC) is used to correct bit errors. All controllers for NAND Flash have built-in ECC to automatically correct bit errors.

Myth 3: NAND Flash is hard to integrate into a system.
The Reality:
NAND Flash has an indirect or I/O-like access. Therefore, it must be accessed through a command sequence instead of through the direct application of an address to the address lines. NAND Flash also has internal command, address and data registers. Today, a wide selection of NAND controllers and software drivers are available, making integration into a system relatively simple.

Myth 4: MLC NOR is close to matching NAND capacities.
The Reality:
The maximum density currently available in MLC NOR Flash is 256Mb. The highest available capacity for MLC NAND Flash is currently 2Gb, and the highest available capacity for SLC NAND Flash is 1Gb.

Myth 5: MLC NAND won't hold up under extended use.
The Reality:
MLC Flash has a different rating for the number of read/write cycles compared to SLC NAND Flash. Currently, SLC Flash is rated to have approximately 100,000 cycles and MLC Flash is rated to have approximately 10,000 cycles. However, if a 256MB MLC card can typically store 250 pictures from a 4-megapixel camera (a conservative estimate), its 10,000 read/write cycles, combined with wear-leveling algorithms in the controller, will enable the user to store and/or view approximately 2.5 million pictures within the expected useful life of the card. That number is so far beyond the average number of photos taken by the typical user that the difference in endurance is not significant for this application.

Myth 6: MLC NAND does not have the performance or endurance to reliably store your digital photos.
The Reality:
MLC NAND is rated to have approximately 10,000 cycles, a level that is lower than SLC NAND, but more than sufficient to meet the needs of the vast majority of consumer users. A significant portion of the NAND Flash-based memory cards on the market today are made from MLC NAND, and the continuing rapid growth of this market can be considered an indication that the performance is meeting consumers' needs.

Myth 7: MLC NAND does not have high enough performance for streaming video.
The Reality:
The performance of MLC NAND is sufficient to support the 6 to 8 Mbit/second transfer rate needed to store MPEG-2 compressed video on a memory card. This works out to approximately 1MB/second. MLC NAND can transfer and write approximately 1.7MB/second.

Myth 8: SLC NAND is a generation ahead of MLC NAND.
The Reality:
On Toshiba's roadmap, SLC development leads MLC by only two to three months. Presently, for each new generation, SLC chips are designed with MLC requirements in mind, so there is little lag-time between the two types of NAND.

Myth 9: The additional circuitry needed for MLC NAND takes up a significant amount of real estate.
The Reality:
The circuitry required for MLC NAND is relatively minimal. A 4Gb MLC NAND Flash chip provides approximately 1.95 times greater density than a 2Gb SLC NAND chip. We believe that the more important question to the user is "what density can you get in a chip today?" Presently, the highest density MLC NAND Flash in production is 4Gb, whereas the highest density SLC NAND in mass production is 2Gb. The market demand for ever-higher densities of removable storage makes the lower-cost, higher density MLC card attractive to users and continues to enable new applications to emerge.

Myth 10: NAND Flash is a slow storage technology.
The Reality:
NAND Flash offers excellent performance for data storage. As a point of comparison, it can offer significantly faster performance and reliability than a hard disk drive, depending on the number and size of files transferred. For a random access of a 2kB file, a typical hard disk drive might take approximately 10ms to retrieve a file, while NAND Flash would take about 0.13ms to retrieve a similar size file. For a comparable write function with the 2kB file, NAND could be as much as 20 times faster. Because it is a solid state memory with no moving parts, NAND flash features a significantly shorter random access time compared to a mechanical hard disk drive.


Wednesday, September 29, 2004

Should I buy a new Flash Memory now?


Good question. Based on recent ads, the prices of flash memory have tumbled significantly compared to a few months ago. One online merchant I visited (www.tigerdirect.com) even offers a 256 MB Kingston CompactFlash card for $9.99 (I guess it is 1x speed).

Depending on your needs, the price can range from US $9.99 up to a couple of hundred dollars for the top cards (such as 4 GB, 40x speed). Speed plays a big role in pricing. The speed rating works the same way as for CD drives (1x is 150 KB/sec, 2x is 300 KB/sec, and so on). The new high-speed CF is sometimes called CF II.
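Since the "x" rating is just a multiple of the 150 KB/s CD-ROM base rate, turning a card's rating into real throughput is a one-liner. A small sketch (the 40x figure matches the top-end card mentioned above):

/* ---- cf_speed.c: convert a CompactFlash "x" rating into KB/s and MB/s ---- */
#include <stdio.h>

int main(void)
{
    const double base_kb_per_sec = 150.0;      /* 1x = 150 KB/s, same as a CD drive */
    int ratings[] = {1, 2, 12, 40};

    for (int i = 0; i < 4; i++) {
        double kbs = ratings[i] * base_kb_per_sec;
        printf("%2dx = %6.0f KB/s (about %.1f MB/s)\n", ratings[i], kbs, kbs / 1024.0);
    }
    return 0;   /* e.g. 40x = 6000 KB/s, roughly 5.9 MB/s */
}
/* ---- end of cf_speed.c ---- */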

One thing I still hate to see is that there are too many variants and different standards of flash memory. There is CompactFlash I/II, Memory Stick, Memory Stick Pro, Memory Stick Duo, SmartMedia, Secure Digital, MMC and whatever else I can't remember. Why don't these people just make one single standard? Our lives would be better, wouldn't they?

For people who are eager to see and compare prices, check www.shopping.com, www.dealtime.com, www.mysimon.com, www.techbargains.com, or www.ebay.com. There are many other online shopping comparison portals, but I cannot list them all here; just search on Google and you will see many of them. Comments from previous buyers on these sites are often useful: the more buyers leave comments, the more confidence (or doubt) you can take from them. Just check them out!
Point-to-Point Protocol

Introduction


The Point-to-Point Protocol (PPP) originally emerged as an encapsulation protocol for transporting IP traffic over point-to-point links. PPP also established a standard for the assignment and management of IP addresses, asynchronous (start/stop) and bit-oriented synchronous encapsulation, network protocol multiplexing, link configuration, link quality testing, error detection, and option negotiation for such capabilities as network layer address negotiation and data-compression negotiation. PPP supports these functions by providing an extensible Link Control Protocol (LCP) and a family of Network Control Protocols (NCPs) to negotiate optional configuration parameters and facilities. In addition to IP, PPP supports other protocols, including Novell's Internetwork Packet Exchange (IPX) and DECnet.



PPP Components

PPP provides a method for transmitting datagrams over serial point-to-point links. PPP contains three main components:

  • A method for encapsulating datagrams over serial links
  • An extensible Link Control Protocol (LCP) to establish, configure, and test the data link connection
  • A family of Network Control Protocols (NCPs) for establishing and configuring different network layer protocols

General Operation

To establish communications over a point-to-point link, the originating PPP first sends LCP frames to configure and (optionally) test the data link. After the link has been established and optional facilities have been negotiated as needed by the LCP, the originating PPP sends NCP frames to choose and configure one or more network layer protocols. When each of the chosen network layer protocols has been configured, packets from each network layer protocol can be sent over the link. The link will remain configured for communications until explicit LCP or NCP frames close the link, or until some external event occurs (for example, an inactivity timer expires or a user intervenes).

Physical Layer Requirements

PPP is capable of operating across any DTE/DCE interface. Examples include EIA/TIA-232-C (formerly RS-232-C), EIA/TIA-422 (formerly RS-422), EIA/TIA-423 (formerly RS-423), and International Telecommunication Union Telecommunication Standardization Sector (ITU-T) (formerly CCITT) V.35. The only absolute requirement imposed by PPP is the provision of a duplex circuit, either dedicated or switched, that can operate in either an asynchronous or synchronous bit-serial mode, transparent to PPP link layer frames. PPP does not impose any restrictions regarding transmission rate other than those imposed by the particular DTE/DCE interface in use.

PPP Link Layer

PPP uses the principles, terminology, and frame structure of the International Organization for Standardization (ISO) HDLC procedures (ISO 3309-1979), as modified by ISO 3309:1984/PDAD1 "Addendum 1: Start/Stop Transmission." ISO 3309-1979 specifies the HDLC frame structure for use in synchronous environments. ISO 3309:1984/PDAD1 specifies proposed modifications to ISO 3309-1979 to allow its use in asynchronous environments. The PPP control procedures use the definitions and control field encodings standardized in ISO 4335-1979 and ISO 4335-1979/Addendum 1-1979. The PPP frame format appears in Figure 13-1.


The following descriptions summarize the PPP frame fields illustrated in Figure 13-1:

  • Flag: A single byte that indicates the beginning or end of a frame; it consists of the binary sequence 01111110.
  • Address: A single byte containing the binary sequence 11111111, the standard broadcast address (PPP does not assign individual station addresses).
  • Control: A single byte containing the binary sequence 00000011, which calls for transmission of user data in an unsequenced frame.
  • Protocol: Two bytes that identify the protocol encapsulated in the data field of the frame.
  • Data: Zero or more bytes containing the datagram for the protocol specified in the protocol field; the default maximum length is 1,500 bytes.
  • Frame check sequence (FCS): Normally 16 bits; by prior agreement, implementations can use a 32-bit FCS for improved error detection.
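Purely as an illustration, here is a rough C sketch of how those six fields line up (a conceptual view only, not how a real PPP stack should parse frames; actual implementations read the byte stream field by field, and the data field is variable length; the protocol numbers in the comments, 0xC021 for LCP and 0x8021 for IPCP, are the standard assignments):

/* ---- ppp_frame.c: conceptual layout of a PPP frame (sketch, not an implementation) ---- */
#include <stdio.h>
#include <stdint.h>

#define PPP_FLAG     0x7E   /* 01111110: marks the start/end of a frame  */
#define PPP_ADDRESS  0xFF   /* 11111111: all-stations (broadcast)        */
#define PPP_CONTROL  0x03   /* 00000011: unnumbered information          */
#define PPP_MAX_INFO 1500   /* default maximum information field length  */

struct ppp_frame {
    uint8_t  flag;                /* Flag                                        */
    uint8_t  address;             /* Address                                     */
    uint8_t  control;             /* Control                                     */
    uint16_t protocol;            /* Protocol (e.g. 0xC021 = LCP, 0x8021 = IPCP) */
    uint8_t  data[PPP_MAX_INFO];  /* Data (variable length on the wire)          */
    uint16_t fcs;                 /* Frame Check Sequence (16 or 32 bits)        */
};

int main(void)
{
    struct ppp_frame f = { PPP_FLAG, PPP_ADDRESS, PPP_CONTROL, 0xC021, {0}, 0 };
    printf("conceptual frame size: %zu bytes\n", sizeof f);
    return 0;
}
/* ---- end of ppp_frame.c ---- */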

The LCP can negotiate modifications to the standard PPP frame structure. Modified frames, however, always will be clearly distinguishable from standard frames.

PPP Link-Control Protocol

The PPP LCP provides a method of establishing, configuring, maintaining, and terminating the point-to-point connection. LCP goes through four distinct phases.

First, link establishment and configuration negotiation occur. Before any network layer datagrams (for example, IP) can be exchanged, LCP first must open the connection and negotiate configuration parameters. This phase is complete when a configuration-acknowledgment frame has been both sent and received.

This is followed by link quality determination. LCP allows an optional link quality determination phase following the link-establishment and configuration-negotiation phase. In this phase, the link is tested to determine whether the link quality is sufficient to bring up network layer protocols. This phase is optional. LCP can delay transmission of network layer protocol information until this phase is complete.

At this point, network layer protocol configuration negotiation occurs. After LCP has finished the link quality determination phase, network layer protocols can be configured separately by the appropriate NCP and can be brought up and taken down at any time. If LCP closes the link, it informs the network layer protocols so that they can take appropriate action.

Finally, link termination occurs. LCP can terminate the link at any time. This usually is done at the request of a user but can happen because of a physical event, such as the loss of carrier or the expiration of an idle-period timer.
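As a quick mental model of that progression (my own sketch; the identifiers are mine, not names taken from the PPP specification), the four phases in order:

/* ---- lcp_phases.c: the four LCP phases as a simple progression (sketch) ---- */
#include <stdio.h>

enum lcp_phase {
    LCP_LINK_ESTABLISHMENT,   /* negotiate configuration, exchange configuration acks      */
    LCP_LINK_QUALITY,         /* optional: test whether the link can carry network traffic */
    LCP_NCP_NEGOTIATION,      /* each network layer protocol configured by its NCP         */
    LCP_LINK_TERMINATION      /* close the link (user request, carrier loss, idle timer)   */
};

int main(void)
{
    static const char *names[] = {
        "link establishment and configuration negotiation",
        "link quality determination (optional)",
        "network layer protocol configuration negotiation",
        "link termination"
    };
    for (int p = LCP_LINK_ESTABLISHMENT; p <= LCP_LINK_TERMINATION; p++)
        printf("Phase %d: %s\n", p + 1, names[p]);
    return 0;
}
/* ---- end of lcp_phases.c ---- */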

Three classes of LCP frames exist. Link-establishment frames are used to establish and configure a link. Link-termination frames are used to terminate a link, and link-maintenance frames are used to manage and debug a link.

These frames are used to accomplish the work of each of the LCP phases.

Summary

The Point-to-Point Protocol (PPP) originally emerged as an encapsulation protocol for transporting IP traffic over point-to-point links. PPP also established a standard for assigning and managing IP addresses, asynchronous and bit-oriented synchronous encapsulation, network protocol multiplexing, link configuration, link quality testing, error detection, and option negotiation for added networking capabilities.

PPP provides a method for transmitting datagrams over serial point-to-point links and includes the following three components:

  • A method for encapsulating datagrams over serial links
  • An extensible LCP to establish, configure, and test the connection
  • A family of NCPs for establishing and configuring different network layer protocols

PPP is capable of operating across any DTE/DCE interface. PPP does not impose any restriction regarding transmission rate other than those imposed by the particular DTE/DCE interface in use.

Six fields make up the PPP frame. The PPP LCP provides a method of establishing, configuring, maintaining, and terminating the point-to-point connection.

Review Questions

Q—What are the main components of PPP?

A—Encapsulation of datagrams, LCP, and NCP.

Q—What is the only absolute physical layer requirement imposed by PPP?

A—The provision of a duplex circuit, either dedicated or switched, that can operate in either an asynchronous or synchronous bit-serial mode, transparent to PPP link layer frames.

Q—How many fields make up the PPP frame, and what are they?

A—Six: Flag, Address, Control, Protocol, Data, and Frame Check Sequence.

Q—How many phases does the PPP LCP go through, and what are they?

A—Four: Link establishment, link quality determination, network layer protocol configuration negotiation, and link termination.

Ubuntu Linux - Another Distro

Another Linux distro is coming: Ubuntu (unfamiliar with the name? Me neither, but it sounds like an African language). Well, it is indeed based on an African word, but I forget exactly what it means (something about "peace").

Anyway, unlike other distros that use KDE, this distro comes with GNOME as its default desktop GUI. I have not tried the GNOME desktop in a while, so I cannot comment on the latest GNOME.

For more detail, check this out: http://www.ubuntulinux.org/

Saturday, September 25, 2004

STI cell processor
Next generation processors


According to this website, the STI Cell Processor, a very sophisticated and advanced microprocessor, is being jointly designed by three giants of microelectronics: Sony, Toshiba and IBM. The processor will be used for 21st-century applications such as living-room multimedia, game consoles and other applications that may require broadband access.

The interesting thing in this story is that broadband access will be used more widely in households, not only for entertainment equipment but also for appliances such as smart microwaves, smart refrigerators, or smart HVAC (Heating, Ventilation and Air Conditioning). This string of applications will definitely require a powerful microprocessor, not only for general computation as on PCs today but also for many real-time processes in embedded systems.

Many impressive nanoelectronic breakthroughs and inventions will be implemented in this microprocessor. Among them are SOI (Silicon on Insulator); 65-nm EUV (Extreme Ultra Violet) lithography; the Cell architecture (loosely compared to how the human brain works); low-k (low-permittivity) dielectrics, which reduce the capacitance and crosstalk between the on-chip wires; and copper interconnects.

This $400-million project will definitely change the way we think about a "PC", as the processor is considered a "supercomputer on a chip". According to the site, the processor will even be more powerful than IBM's "Big Blue" supercomputer, one of the fastest computers in the world. Not only will the processor deliver tera floating-point operations per second (teraFLOPS), it will also have about 20 "mini-cores" that work independently yet coherently and can be grouped together programmatically through software.

Another interesting part is that Sony will use this processor in its next-generation game console, the PS3. If you look at how amazingly the NVidia 6800 Ultra performs with far fewer FLOPS than this Cell processor, you can imagine how good things can get with a "teraFLOPS" Cell processor.

Utilizing massive data bandwidth and vast floating point capabilities, coupled with a parallel processing architecture, the Cell processor based development environment is expected to deliver quantum-leap innovation to entertainment applications. Cell-based workstations will be designed to expand the platform for creating digital content across future movie and video game entertainment industries.

Many applications, especially in multimedia and gaming, will get a big performance boost from this chip. Video rendering that now takes hours, even days, could be done in minutes or even seconds. Ultra-clear surround sound, hyper-realistic 3D animation and other possibilities unimaginable with current processors will be easily achieved by computers using these chips.

I believe the era of "WinTel" will soon dim and a new era of computing will shine. One thing I want to underline: I believe the first operating system to support this chip will be Linux. Believe me!

Tuesday, September 21, 2004

Confusing DRAM nomenclatures

I just figured out the names for the different kinds of DRAM, as listed in the following table:

Standard    Speed/Clock    Name
DDR         266 MHz        PC2100
DDR         333 MHz        PC2700
DDR2        400 MHz        PC3200
DDR2        533 MHz        PC4200
DDR2        675 MHz        PC5400
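The module names are not arbitrary: the PC number is roughly the peak bandwidth in MB/s, i.e. the effective clock (in MT/s) times the 8-byte (64-bit) module width, rounded for marketing. A quick check of the table (my own arithmetic):

/* ---- ddr_names.c: PC rating ~= effective MHz x 8 bytes per transfer ---- */
#include <stdio.h>

int main(void)
{
    int mhz[] = {266, 333, 400, 533, 675};

    for (int i = 0; i < 5; i++)
        printf("DDR(2)-%d -> about PC%d\n", mhz[i], mhz[i] * 8);
    /* 266*8=2128 (~PC2100), 333*8=2664 (~PC2700), 400*8=3200 (PC3200),
       533*8=4264 (~PC4200), 675*8=5400 (PC5400) */
    return 0;
}
/* ---- end of ddr_names.c ---- */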

Saturday, September 18, 2004

What is CORBA?

I am just getting started learning more about this architecture. CORBA (Common Object Request Broker Architecture) is a paradigm for interfacing clients with servers. I googled it and found that there are many different packages for this, from commercial ones such as VisiBroker to open source ones.

Check out http://orbit-resource.sourceforge.net/ as one of them; it has many links related to ORBs.
What is the fastest gaming PC this year?

According to MaximumPC magazine, Alienware's Aurora ALX is the winner, followed closely by Falcon's Mach V. The Aurora is equipped with a 2.6 GHz Athlon 64 FX-53 processor, an ASUS A8V mobo with the VIA K8T800 Pro chipset, 1 GB of DDR400 (PC3200) RAM, a GeForce 6800 Ultra VGA card, RAID0 HDs, an SB Audigy 2 ZS audio card, 1 CD-RW, and 1 dual-layer DVD writer. It is able to run a game like DOOM3 at 1280x1024 resolution with 4x AA enabled at 83.4 fps!

Wait a few more years and we will see new technologies coming to home PCs, such as PCI Express (some mobos and video cards already use this slot), a new ATX form factor, dual- or even multi-core processors, and Microsoft's Longhorn OS. Not to mention the GigE and Wi-Fi interfaces that some mobos already include in their product lines.

Saturday, September 11, 2004

Build or just buy a prebuilt PC?

I just checked some computer makers' web sites and found that pre-built PCs are cheaper than buying the components separately and assembling them yourself. For instance, the 710G gaming PC from Gateway.com is priced at "only" $2000, equipped with a 3.2 GHz Pentium 4 (800 MHz FSB), 1 GB of DDR RAM, a 250 GB SATA HD, a 19" monitor, a 256 MB NVidia GeForce 5950 Ultra, a SoundBlaster Audigy 2, etc.

Friday, September 3, 2004










The chipset's designer did not comment, however, on the reasons for the PSP's delay. Sony has already said that the PSP will not ship in the U.S. until early 2005, and reports have surfaced that software makers don't believe the handheld will be released here until June 2005. Sony will ship the PSP in Japan later this year.



Not only will the PSP signal a new round in the console wars by challenging the established Nintendo Game Boy and the new Nintendo DS, but the player will also begin the introduction of dedicated 3D games into the handheld space.



The PSP will be based around four key blocks: the main CPU core, the media engine, the dedicated graphics processor, and the "Virtual Mobile engine," a reconfigurable assistant chip that will also be used in Sony's Walkman portable music player to conserve battery life. At press time, it wasn't clear whether each block would be integrated or broken out into a separate chip.



Some of the basic capabilities of the PSP player have already been disclosed. The game player will include a 4.3-inch widescreen TFT LCD, will contain a lithium-ion battery, and will process AAC and MP3 music and MPEG-4 AVC for pictures and movies. Games and other content will be stored on a 1.8-Gbyte UMD optical disc. The PSP is said to measure 70mm x 74mm x 23mm and weigh 260 grams.




Sony PSP Game Processing Unit

In a presentation at the Hot Chips conference here Tuesday, designer Masanobu Okabe described further details of the PSP chipset, which the company concealed with the non-specific title: "A 90-nm embedded DRAM single-chip LSI with a 3D graphics H.264 codec engine and a reconfigurable processor".



Sony's PSP Embedded DRAM Specs




Sony Computer Entertainment executives said in May that the PSP would be powered by a MIPS R4000 embedded CPU. Okabe said Tuesday that the CPU will run at speeds up to 333 MHz, with a bus that can run at speeds up to 166 MHz, depending upon the application load. In low-load situations, Okabe said, the chip will power down unused blocks. The entire chip will total 6 million gates and an undisclosed number of transistors. Sony will fabricate the chip in a 7-layer, copper-enhanced 90-nanometer process.



Sony PSP System Chip Block Diagram



To save power, the chip core's voltage will range between 0.8 and 1.2 volts. Okabe declined to disclose the average power consumption of the chip or the 3D engine, claiming that the power will vary depending on the application.



The host CPU block will also contain a security sub-block designed to protect data and help prevent hacking the PSP, its games, or the stored data.



Okabe's presentation of the I/O also contained some unexpected surprises. Early disclosures of the PSP indicated that the player would be capable of communicating via 802.11b WiFi. The only I/O functions Okabe described were USB 2.0 and Memory Stick, Sony's small-form-factor flash memory format.



Sony PSP Chip Summary


The PSP's graphics engine will feature a 512-bit interface, Okabe said, pushing 664 million pixels or 35 million polygons per second. Freed from the need to conform to any other graphics API besides its own, Sony decided to support some basic graphics primitives as well as directional lighting, clipping, environment projection and texture mapping, fogging, alpha blending, depth and stencil tests, and dithering, all using either 16- or 32-bit color. The 166-MHz graphics core will include 2-Mbytes of embedded graphics memory.


Sony's PSP Embedded DRAM Specs



Sony apparently will support a graphics model based on surfaces, rather than polygons. Okabe displayed an illustration of a cartoon character that looked more realistic than a polygon-based model, which he said contained the same amount of data. The graphics block will also be capable of vertex blending, a morphing technology that can interpolate changes made between objects.


Sony PSP Graphics Chip Specs



"Small data size is advantageous to mobile data software," Okabe said.



Unfortunately, the purpose of the VME still remains a bit of a mystery. The reconfigurable logic will run at 166-MHz, and apparently reconfigure its internal 24-bit datapath in a single clock cycle, into configurations suited for H.264, a video algorithm based on MPEG-4, as well as game sounds and sound effects. Since the VME must be reconfigured for each operation, attendees here said they assumed that the PSP will not be able to combine video with external sound effects. However, Okabe said that the decoder would be the fastest found in any consumer-electronics device at the time of the PSP's launch.



Sony PSP Media Processing Unit


Sony PSP Graphics Module Block Diagram


Sony PSP Reconfigurable Virtual Mobile Engine


Sony's VME Dissected




Sunday, August 29, 2004

Recompiling KDE-3.3 with all the bells and chimes

Well, I just solved one little problem during compilation of the new KDE 3.3. There was a problem while I was trying to compile kdelibs with the options --mt and --with-threading: it stopped with an error message something like "invalid ELF header" while compiling kdedoctools. It turned out to be caused by libpthread.so, which is actually a text file (a linker script) pointing to the real shared object file (libpthread.so.22). It seems my new kernel (2.6.8.1) did not like it. After renaming it to libpthread.so.1, the compilation went through successfully.

Phew!

Monday, August 23, 2004

Some Interesting Software (Free!)

http://www.security.nnov.ru/advisories/timesync.asp
He just touches the surface, of course, and is only delving into some aspects of one particular implementation - but what we're seeing is that folks are gaining a greater understanding of these types of issues from a systems approach...

Apparently, we aren't talking about simple brute-forcing or birthday attacks, either. Antoine Joux just presented a paper on this subject at Crypto 2004 in Santa Barbara - did anyone attend?

Here's another one on MD5, MD4, HAVAL-128, and RIPEMD:
http://eprint.iacr.org/2004/199/

Parallel SSH - What is that? http://www.theether.org/pssh


Sunday, August 22, 2004

My own server is now up and running at Home Linux Server. Right now it does not have much in it, as I have just finished installing the Apache server on it. Stay tuned for new stuff there!

Sunday, August 1, 2004

Haydar Linux is a fully functional Arabic Linux based on Debian (?). But when I clicked on the link, there was no information about where to download it. Does anybody know where we can download it? I'd appreciate it.

Thanks.

Feather Linux (http://featherlinux.berlios.de) is a lightweight Linux that fits on a 64 MB USB pen drive or a half-size CD. Try it out!
MS Longhorn: 3D Graphics
Microsoft Longhorn Reinvents Desktop Graphics
From games to the desktop itself, 3D graphics will be everywhere in the new Windows Longhorn OS. We've got a sneak peek at the new Windows Graphics Foundation (WGF) architecture that will make it happen.
Linux Takes on Windows Gaming
Review: Formerly known as WineX, Cedega 4.0 offers hope for Linux users who want to play graphics-heavy Windows games like Far Cry and Battlefield Vietnam. But how well does it really work -- and is it worth the monthly subscription?
The Evolving 3D Graphics Landscape
Graphics cards have more of an impact on overall performance than ever before, but the vast array of choices can be confusing. We help you sort through the mess and decide how much video muscle you really need.
The Evolving PC Audio Landscape
With Intel's HD Audio coming, Microsoft planning changes for Longhorn audio, and software DVD-Audio players shipping, PC sound is in flux. We examine the present (and future) of PC audio.
The Evolving Memory Landscape
/* A Demo for computing Polynomials.
 * (C) 2004, The Seeker
 */

/* ------------ poly.h ----------------------- */

#ifndef POLYNOM_H
#define POLYNOM_H

#define MAX_POLYNOM_ELEMENTS     250

/* one term: coef * x^pow_x * y^pow_y */
typedef struct {
    int coef;
    int pow_x;
    int pow_y;
} PType;

/* a polynomial: n terms stored in a malloc'ed array */
typedef struct {
    int    n;
    PType *poly;
    /* POLYNOM *next; */
} POLYNOM;

/* doubly linked list node of polynomials (not used yet) */
typedef struct POLYNOMLIST {
    struct POLYNOMLIST *prev;
    POLYNOM             polynom;
    struct POLYNOMLIST *next;
} POLYNOMLIST;

#endif
/* ------------------ end of poly.h ---------------------------- */

/* ------- poly.c ------------------ */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "poly.h"

const char *delim = " ";

/* Read one polynomial from stdin as a sequence of triples: coef pow_x pow_y ...
 * Returns 1 on success, 0 on empty or invalid input. */
int Polynom_GetInput(POLYNOM *plnm)
{
    char buf[1024];
    int i, j, n;
    div_t divr;
    char *p;
    int a[3*MAX_POLYNOM_ELEMENTS];

    printf("Enter Polynoms = ");
    if (fgets(buf, sizeof(buf), stdin) == NULL)
        return 0;
    buf[strcspn(buf, "\n")] = '\0';
    if (strlen(buf) == 0) {
        plnm->n = 0;
        plnm->poly = NULL;
        return 0;
    }
    n = 0;
    if ((p = strtok(buf, delim)) != NULL) {
        /* found the first input */
        a[n++] = atoi(p);
    }
    while ((p = strtok(NULL, delim)) != NULL && n < 3*MAX_POLYNOM_ELEMENTS)
        a[n++] = atoi(p);
    if (n > 0) {
        /* that's all the input; now ensure it is a repetition of triples */
        divr = div(n, 3);
        if (divr.rem == 0) {
            /* yes, it is a sequence of triples */
            plnm->poly = (PType *)malloc(n/3 * sizeof(PType));
            if (plnm->poly == NULL)
                exit(1);
            plnm->n = n/3;
            for (i = 0, j = 0; i < n; i += 3, j++) {
                plnm->poly[j].coef  = a[i];
                plnm->poly[j].pow_x = a[i+1];
                plnm->poly[j].pow_y = a[i+2];
            }
            return 1;
        }
    }
    return 0;
}

int Polynom_Copy(const POLYNOM *src, POLYNOM *dest)
{
    /* destination polynom must not be NULL */
    if (src == NULL || dest == NULL) return 0;
    dest->poly = (PType *)malloc(src->n * sizeof(PType));
    if (dest->poly) {
        memcpy(dest->poly, src->poly, src->n * sizeof(PType));
        dest->n = src->n;
        return 1;
    }
    return 0;
}

void Polynom_Free(POLYNOM *p)
{
    if (!p) return;
    if (p->poly) {
        free(p->poly);
        p->n = 0;
        p->poly = NULL;
    }
}

/* Print a polynomial as, e.g., 3x^2y - y + 5 */
void Polynom_Print(const POLYNOM *p)
{
    int i;
    char strc[50], strx[50], stry[50];
    short sign;
    char strsign[5];
    short first_time = 1;
    int coef;

    if (!p) return;
    strx[0] = '\0';
    stry[0] = '\0';
    for (i = 0; i < p->n; i++) {
        coef = p->poly[i].coef;
        /* ignore coef == 0; using a temporary for the sign is slightly faster
         * than repeatedly accessing the structure through the pointer p */
        if (coef != 0) {
            sign = (coef < 0) ? -1 : 1;
            if (first_time) {
                strcpy(strsign, (sign < 0) ? "-" : "");
                first_time = 0;
            } else {
                strcpy(strsign, (sign < 0) ? " - " : " + ");
            }
            if (abs(coef) == 1 && (p->poly[i].pow_x != 0 || p->poly[i].pow_y != 0))
                strcpy(strc, strsign);
            else
                sprintf(strc, "%s%0d", strsign, abs(coef));
            if (p->poly[i].pow_x == 0)
                strcpy(strx, "");
            else if (p->poly[i].pow_x == 1)
                strcpy(strx, "x");
            else
                sprintf(strx, "x^%-d", p->poly[i].pow_x);
            if (p->poly[i].pow_y == 0)
                strcpy(stry, "");
            else if (p->poly[i].pow_y == 1)
                strcpy(stry, "y");
            else
                sprintf(stry, "y^%-d", p->poly[i].pow_y);
            printf("%s%s%s", strc, strx, stry);
        }
    }
    printf("\n");
}

/* Add two terms of the same order; returns 1 if they could be combined */
int Polynom_Add(PType *result, const PType P1, const PType P2)
{
    if (result == NULL)
        return 0;
    if ((P1.pow_x == P2.pow_x) && (P1.pow_y == P2.pow_y)) {
        /* P1 & P2 have the same order of x,y */
        result->coef  = P1.coef + P2.coef;
        result->pow_x = P1.pow_x;
        result->pow_y = P1.pow_y;
        return 1;
    }
    return 0;
}

/* combine terms of equal order (not implemented yet) */
void Polynom_Simplify(POLYNOM *p1, POLYNOM *p2)
{
    (void)p1;
    (void)p2;
}

int main(int argc, char *argv[])
{
    POLYNOM p1;

    (void)argc;
    (void)argv;
    while (Polynom_GetInput(&p1)) {
        Polynom_Print(&p1);
        Polynom_Free(&p1);
    }
    return 0;
}
/* ------- end of poly.c ------------------ */

Friday, May 28, 2004

Linux on the PS2 by John Littler -- As consoles increase in power and alternate operating systems increase in functionality and flexibility, it's ever more attractive to port your favorite free operating system. In the case of Sony's PlayStation 2, the company even encourages it. John Littler explores Linux on the PS2, including hardware, installation, upgrades, alternatives, and game programming.

coLinux: Linux for Windows Without Rebooting by KIVILCIM Hindistan -- Trying Linux just keeps getting easier. Knoppix and other live CDs let you take Linux with you on CD and USB keys, but you have to reboot to run your software. What about Windows users who want to use Linux in conjunction with their existing systems? KIVILCIM Hindistan explores the world of coLinux -- cooperative Linux.

Build Strings with { } by Jerry Peek -- Save typing by expanding strings at the shell prompt. Learn how to use the {} pattern-expansion characters in this excerpt from Unix Power Tools, 2nd Edition.


Using and Customizing Knoppix by Robert Bernier -- Several Linux distributions boot directly from CD-ROMs. How many are usable in that state? How many are customizable in that state? Klaus Knopper's Knoppix is perhaps the best known of these distributions. Robert Bernier explains how to use Knoppix and how to customize your own self-booting distribution CD.

Variable Manipulation and Output by John Coggeshall -- John Coggeshall covers basic variable manipulation and output, including math operators and strings.

Basic PHP Syntax by John Coggeshall -- John Coggeshall covers basic PHP syntax, including variable usage, variable types, and how to print variables to the web browser.

Introduction to Socket Programming with PHP by Daniel Solin -- Daniel Solin uses a game analogy to show how PHP can be used to exchange data between two computers using network sockets.


An Introduction to Extreme Programming by chromatic -- When you look at it closely, Extreme Programming isn't really as extreme as it is logical. This introduction shows you the tenets of XP and its relationship to open source methods for writing software.

Tuesday, May 25, 2004

Some Interesting RFC Docs to Read


  • 2105 - Cisco Systems' Tag Switching Architecture Overview
  • 2104 - HMAC: Keyed-hashing for Message Authentication
  • 2095 - IMAP/POP AUTHorize Extension for Simple Challenge/Response
  • 2085 - HMAC-MD5 IP Authentication with Replay Prevention
  • 2083 - PNG (Portable Network Graphics) Spec. version 1.0
  • 2082 - RIP2 MD5 Authentication
  • 2080 - RIPng for IPv6
  • 2069 - An Extension to HTTP: Digest Access Authentication
  • 2068 - Hypertext Transfer Protocol -- HTTP/1.1
  • 2058 - Remote Authentication Dial In User Service (RADIUS)
  • 2046 - Multipurpose Internet Mail Extension (MIME) Part 2: Media Types
  • 2045 - Multipurpose Internet Mail Extension (MIME) Part 1: Format of Internet Message Bodies

Thursday, February 19, 2004

This is my first attempt to put something to blogger. Sorry, it has nothing in it for now.